00:00:00.001 Started by upstream project "autotest-nightly" build number 4253
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3616
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.063 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.063 The recommended git tool is: git
00:00:00.064 using credential 00000000-0000-0000-0000-000000000002
00:00:00.065 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.096 Fetching changes from the remote Git repository
00:00:00.097 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.151 Using shallow fetch with depth 1
00:00:00.151 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.151 > git --version # timeout=10
00:00:00.218 > git --version # 'git version 2.39.2'
00:00:00.218 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.270 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.270 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.167 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.181 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.193 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:05.193 > git config core.sparsecheckout # timeout=10
00:00:05.204 > git read-tree -mu HEAD # timeout=10
00:00:05.221 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:05.240 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:05.240 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:05.346 [Pipeline] Start of Pipeline
00:00:05.359 [Pipeline] library
00:00:05.361 Loading library shm_lib@master
00:00:05.361 Library shm_lib@master is cached. Copying from home.
00:00:05.378 [Pipeline] node
00:00:05.390 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.391 [Pipeline] {
00:00:05.399 [Pipeline] catchError
00:00:05.400 [Pipeline] {
00:00:05.409 [Pipeline] wrap
00:00:05.417 [Pipeline] {
00:00:05.424 [Pipeline] stage
00:00:05.426 [Pipeline] { (Prologue)
00:00:05.623 [Pipeline] sh
00:00:05.913 + logger -p user.info -t JENKINS-CI
00:00:05.934 [Pipeline] echo
00:00:05.936 Node: CYP12
00:00:05.946 [Pipeline] sh
00:00:06.258 [Pipeline] setCustomBuildProperty
00:00:06.272 [Pipeline] echo
00:00:06.274 Cleanup processes
00:00:06.281 [Pipeline] sh
00:00:06.571 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.571 3489688 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.587 [Pipeline] sh
00:00:06.877 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.877 ++ grep -v 'sudo pgrep'
00:00:06.877 ++ awk '{print $1}'
00:00:06.877 + sudo kill -9
00:00:06.877 + true
00:00:06.891 [Pipeline] cleanWs
00:00:06.907 [WS-CLEANUP] Deleting project workspace...
00:00:06.907 [WS-CLEANUP] Deferred wipeout is used...
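The "Cleanup processes" step above reduces to a small reusable idiom: list candidate PIDs with pgrep, filter out the pgrep invocation itself, and kill whatever is left while tolerating the empty case. A minimal standalone sketch of that idiom follows; the WORKSPACE variable and script framing are mine, not part of the pipeline.

#!/usr/bin/env bash
# Sketch of the stale-process cleanup the Prologue stage runs above.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

# pgrep -af matches against full command lines, so the pgrep process itself
# appears in its own results; grep -v filters it back out.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

# With no leftover processes "$pids" is empty and kill exits non-zero; the
# trailing "|| true" (the "+ true" in the trace) keeps a set -e caller alive.
sudo kill -9 $pids || true   # $pids deliberately unquoted: one argument per PID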
00:00:06.918 [WS-CLEANUP] done
00:00:06.922 [Pipeline] setCustomBuildProperty
00:00:06.932 [Pipeline] sh
00:00:07.219 + sudo git config --global --replace-all safe.directory '*'
00:00:07.313 [Pipeline] httpRequest
00:00:07.691 [Pipeline] echo
00:00:07.693 Sorcerer 10.211.164.101 is alive
00:00:07.700 [Pipeline] retry
00:00:07.702 [Pipeline] {
00:00:07.711 [Pipeline] httpRequest
00:00:07.715 HttpMethod: GET
00:00:07.715 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.716 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.719 Response Code: HTTP/1.1 200 OK
00:00:07.719 Success: Status code 200 is in the accepted range: 200,404
00:00:07.719 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.764 [Pipeline] }
00:00:08.780 [Pipeline] // retry
00:00:08.788 [Pipeline] sh
00:00:09.075 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:09.092 [Pipeline] httpRequest
00:00:10.568 [Pipeline] echo
00:00:10.570 Sorcerer 10.211.164.101 is alive
00:00:10.580 [Pipeline] retry
00:00:10.582 [Pipeline] {
00:00:10.597 [Pipeline] httpRequest
00:00:10.602 HttpMethod: GET
00:00:10.603 URL: http://10.211.164.101/packages/spdk_b264e22f0a79822588cc09c257bc84dedc4e1862.tar.gz
00:00:10.603 Sending request to url: http://10.211.164.101/packages/spdk_b264e22f0a79822588cc09c257bc84dedc4e1862.tar.gz
00:00:10.623 Response Code: HTTP/1.1 200 OK
00:00:10.624 Success: Status code 200 is in the accepted range: 200,404
00:00:10.624 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b264e22f0a79822588cc09c257bc84dedc4e1862.tar.gz
00:00:49.056 [Pipeline] }
00:00:49.075 [Pipeline] // retry
00:00:49.083 [Pipeline] sh
00:00:49.372 + tar --no-same-owner -xf spdk_b264e22f0a79822588cc09c257bc84dedc4e1862.tar.gz
00:00:52.690 [Pipeline] sh
00:00:52.978 + git -C spdk log --oneline -n5
00:00:52.978 b264e22f0 accel/error: fix callback type for tasks in a sequence
00:00:52.978 0732c1430 accel/error: don't submit tasks intended to fail
00:00:52.978 b53b961c8 accel/error: move interval check to a function
00:00:52.978 c9f92cbfa accel/error: check interval before submission
00:00:52.978 ff0dc8ce5 lib/reduce: Use memset instead of memcpy setting 0
00:00:52.991 [Pipeline] }
00:00:53.006 [Pipeline] // stage
00:00:53.015 [Pipeline] stage
00:00:53.018 [Pipeline] { (Prepare)
00:00:53.035 [Pipeline] writeFile
00:00:53.051 [Pipeline] sh
00:00:53.338 + logger -p user.info -t JENKINS-CI
00:00:53.350 [Pipeline] sh
00:00:53.636 + logger -p user.info -t JENKINS-CI
00:00:53.651 [Pipeline] sh
00:00:53.939 + cat autorun-spdk.conf
00:00:53.939 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:53.939 SPDK_TEST_NVMF=1
00:00:53.939 SPDK_TEST_NVME_CLI=1
00:00:53.939 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:53.939 SPDK_TEST_NVMF_NICS=e810
00:00:53.939 SPDK_RUN_ASAN=1
00:00:53.939 SPDK_RUN_UBSAN=1
00:00:53.939 NET_TYPE=phy
00:00:53.947 RUN_NIGHTLY=1
00:00:53.953 [Pipeline] readFile
00:00:53.979 [Pipeline] withEnv
00:00:53.982 [Pipeline] {
00:00:53.995 [Pipeline] sh
00:00:54.287 + set -ex
00:00:54.287 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:54.287 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:54.287 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:54.287 ++ SPDK_TEST_NVMF=1
00:00:54.287 ++ SPDK_TEST_NVME_CLI=1
00:00:54.287 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:54.287 ++ SPDK_TEST_NVMF_NICS=e810
00:00:54.287 ++ SPDK_RUN_ASAN=1
00:00:54.287 ++ SPDK_RUN_UBSAN=1
00:00:54.287 ++ NET_TYPE=phy
00:00:54.287 ++ RUN_NIGHTLY=1
00:00:54.287 + case $SPDK_TEST_NVMF_NICS in
00:00:54.287 + DRIVERS=ice
00:00:54.287 + [[ tcp == \r\d\m\a ]]
00:00:54.287 + [[ -n ice ]]
00:00:54.287 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:02.432 rmmod: ERROR: Module irdma is not currently loaded
00:01:02.432 rmmod: ERROR: Module i40iw is not currently loaded
00:01:02.432 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:02.432 + true
00:01:02.432 + for D in $DRIVERS
00:01:02.432 + sudo modprobe ice
00:01:02.432 + exit 0
00:01:02.442 [Pipeline] }
00:01:02.456 [Pipeline] // withEnv
00:01:02.462 [Pipeline] }
00:01:02.476 [Pipeline] // stage
00:01:02.486 [Pipeline] catchError
00:01:02.488 [Pipeline] {
00:01:02.501 [Pipeline] timeout
00:01:02.501 Timeout set to expire in 1 hr 0 min
00:01:02.503 [Pipeline] {
00:01:02.516 [Pipeline] stage
00:01:02.518 [Pipeline] { (Tests)
00:01:02.530 [Pipeline] sh
00:01:02.817 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:02.817 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:02.817 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:02.817 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:02.817 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:02.817 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:02.817 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:02.817 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:02.817 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:02.817 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:02.817 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:02.817 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:02.817 + source /etc/os-release
00:01:02.817 ++ NAME='Fedora Linux'
00:01:02.817 ++ VERSION='39 (Cloud Edition)'
00:01:02.817 ++ ID=fedora
00:01:02.817 ++ VERSION_ID=39
00:01:02.817 ++ VERSION_CODENAME=
00:01:02.817 ++ PLATFORM_ID=platform:f39
00:01:02.817 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:02.817 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:02.817 ++ LOGO=fedora-logo-icon
00:01:02.817 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:02.817 ++ HOME_URL=https://fedoraproject.org/
00:01:02.817 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:02.817 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:02.817 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:02.817 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:02.817 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:02.817 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:02.817 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:02.817 ++ SUPPORT_END=2024-11-12
00:01:02.817 ++ VARIANT='Cloud Edition'
00:01:02.817 ++ VARIANT_ID=cloud
00:01:02.817 + uname -a
00:01:02.817 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:02.817 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:06.117 Hugepages
00:01:06.117 node hugesize free / total
00:01:06.117 node0 1048576kB 0 / 0
00:01:06.117 node0 2048kB 0 / 0
00:01:06.117 node1 1048576kB 0 / 0
00:01:06.117 node1 2048kB 0 / 0
00:01:06.117
00:01:06.117 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:06.117 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:06.117 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:06.117 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:06.117 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:06.117 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:06.117 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:06.117 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:06.117 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:06.378 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:06.378 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:06.378 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:06.378 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:06.378 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:06.378 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:06.378 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:06.378 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:06.378 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:06.378 + rm -f /tmp/spdk-ld-path
00:01:06.378 + source autorun-spdk.conf
00:01:06.378 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.378 ++ SPDK_TEST_NVMF=1
00:01:06.378 ++ SPDK_TEST_NVME_CLI=1
00:01:06.378 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:06.378 ++ SPDK_TEST_NVMF_NICS=e810
00:01:06.378 ++ SPDK_RUN_ASAN=1
00:01:06.378 ++ SPDK_RUN_UBSAN=1
00:01:06.378 ++ NET_TYPE=phy
00:01:06.378 ++ RUN_NIGHTLY=1
00:01:06.378 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:06.378 + [[ -n '' ]]
00:01:06.378 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:06.378 + for M in /var/spdk/build-*-manifest.txt
00:01:06.378 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:06.378 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:06.378 + for M in /var/spdk/build-*-manifest.txt
00:01:06.378 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:06.378 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:06.378 + for M in /var/spdk/build-*-manifest.txt
00:01:06.378 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:06.378 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:06.378 ++ uname
00:01:06.378 + [[ Linux == \L\i\n\u\x ]]
00:01:06.378 + sudo dmesg -T
00:01:06.378 + sudo dmesg --clear
00:01:06.378 + dmesg_pid=3491358
00:01:06.378 + [[ Fedora Linux == FreeBSD ]]
00:01:06.378 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:06.378 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:06.378 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:06.378 + [[ -x /usr/src/fio-static/fio ]]
00:01:06.378 + export FIO_BIN=/usr/src/fio-static/fio
00:01:06.378 + FIO_BIN=/usr/src/fio-static/fio
00:01:06.378 + sudo dmesg -Tw
00:01:06.378 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:06.378 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:06.378 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:06.378 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:06.378 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:06.378 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:06.378 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:06.378 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:06.378 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:06.639 13:06:14 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:06.639 13:06:14 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:06.639 13:06:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.639 13:06:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:06.639 13:06:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:06.639 13:06:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:06.639 13:06:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:06.639 13:06:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1
00:01:06.639 13:06:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:06.639 13:06:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:06.639 13:06:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1
00:01:06.639 13:06:14 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:06.639 13:06:14 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:06.639 13:06:14 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:06.639 13:06:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:06.639 13:06:14 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:06.639 13:06:14 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:06.639 13:06:14 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:06.639 13:06:14 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:06.639 13:06:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:06.639 13:06:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:06.639 13:06:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:06.639 13:06:14 -- paths/export.sh@5 -- $ export PATH
00:01:06.639 13:06:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:06.639 13:06:14 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:06.639 13:06:14 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:06.639 13:06:14 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730981174.XXXXXX
00:01:06.639 13:06:14 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730981174.l4ixdx
00:01:06.639 13:06:14 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:06.639 13:06:14 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:06.639 13:06:14 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:06.639 13:06:14 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:06.639 13:06:14 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:06.639 13:06:14 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:06.639 13:06:14 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:06.639 13:06:14 -- common/autotest_common.sh@10 -- $ set +x
00:01:06.639 13:06:14 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:06.639 13:06:14 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:06.639 13:06:14 -- pm/common@17 -- $ local monitor
00:01:06.640 13:06:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:06.640 13:06:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:06.640 13:06:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:06.640 13:06:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:06.640 13:06:14 -- pm/common@21 -- $ date +%s
00:01:06.640 13:06:14 -- pm/common@21 -- $ date +%s
00:01:06.640 13:06:14 -- pm/common@25 -- $ sleep 1
00:01:06.640 13:06:14 -- pm/common@21 -- $ date +%s
00:01:06.640 13:06:14 -- pm/common@21 -- $ date +%s
00:01:06.640 13:06:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730981174
00:01:06.640 13:06:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730981174
00:01:06.640 13:06:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730981174
00:01:06.640 13:06:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730981174
00:01:06.640 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730981174_collect-cpu-load.pm.log
00:01:06.640 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730981174_collect-vmstat.pm.log
00:01:06.640 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730981174_collect-cpu-temp.pm.log
00:01:06.640 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730981174_collect-bmc-pm.bmc.pm.log
00:01:07.584 13:06:15 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:07.584 13:06:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:07.584 13:06:15 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:07.584 13:06:15 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:07.584 13:06:15 -- spdk/autobuild.sh@16 -- $ date -u
00:01:07.584 Thu Nov 7 12:06:15 PM UTC 2024
00:01:07.584 13:06:15 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:07.584 v25.01-pre-175-gb264e22f0
00:01:07.584 13:06:15 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:07.584 13:06:15 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:07.584 13:06:15 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:07.584 13:06:15 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:07.584 13:06:15 -- common/autotest_common.sh@10 -- $ set +x
00:01:07.584 ************************************
00:01:07.584 START TEST asan
00:01:07.584 ************************************
00:01:07.584 13:06:15 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:01:07.584 using asan
00:01:07.584
00:01:07.584 real 0m0.000s
00:01:07.584 user 0m0.000s
00:01:07.584 sys 0m0.000s
00:01:07.584 13:06:15 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:07.584 13:06:15 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:07.584 ************************************
00:01:07.584 END TEST asan
00:01:07.584 ************************************
00:01:07.845 13:06:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:07.845 13:06:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:07.845 13:06:15 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:07.845 13:06:15 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:07.845 13:06:15 -- common/autotest_common.sh@10 -- $ set +x
00:01:07.845 ************************************
00:01:07.845 START TEST ubsan
00:01:07.845 ************************************
00:01:07.845 13:06:15 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:07.845 using ubsan
00:01:07.845
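The four collect-* invocations and their "Redirecting to ..." lines above follow one pattern that is easier to see untangled from the xtrace. A minimal sketch under the same script locations and flags as this run (-d output dir, -l log to file, -p log-name prefix); backgrounding with & is my framing of "start and leave running", not something the trace shows explicitly.

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
POWER_OUT=$SPDK/../output/power     # shared directory for all monitor logs
STAMP=$(date +%s)                   # one timestamp so the four logs correlate

# Unprivileged collectors: CPU load, CPU temperature, vmstat.
for mon in collect-cpu-load collect-cpu-temp collect-vmstat; do
    "$SPDK/scripts/perf/pm/$mon" -d "$POWER_OUT" -l -p "monitor.autobuild.sh.$STAMP" &
done
# BMC power readings need elevated privileges, hence sudo -E in the trace.
sudo -E "$SPDK/scripts/perf/pm/collect-bmc-pm" -d "$POWER_OUT" -l -p "monitor.autobuild.sh.$STAMP" &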
00:01:07.845 real 0m0.000s
00:01:07.845 user 0m0.000s
00:01:07.845 sys 0m0.000s
00:01:07.845 13:06:15 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:07.845 13:06:15 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:07.845 ************************************
00:01:07.845 END TEST ubsan
00:01:07.845 ************************************
00:01:07.845 13:06:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:07.845 13:06:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:07.845 13:06:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:07.845 13:06:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:07.845 13:06:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:07.845 13:06:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:07.845 13:06:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:07.845 13:06:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:07.845 13:06:15 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:07.845 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:07.845 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:08.106 Using 'verbs' RDMA provider
00:01:23.983 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:36.217 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:36.217 Creating mk/config.mk...done.
00:01:36.217 Creating mk/cc.flags.mk...done.
00:01:36.217 Type 'make' to build.
00:01:36.217 13:06:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:01:36.217 13:06:43 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:36.217 13:06:43 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:36.217 13:06:43 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.217 ************************************
00:01:36.217 START TEST make
00:01:36.217 ************************************
00:01:36.217 13:06:44 make -- common/autotest_common.sh@1127 -- $ make -j144
00:01:36.478 make[1]: Nothing to be done for 'all'.
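Reproducing this build step outside Jenkins takes two commands. The configure flags below are copied verbatim from the autobuild.sh@67 line above; only the job count differs (-j144 matches this 144-thread machine, nproc is the portable spelling).

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
make -j"$(nproc)"   # the trace runs this as: run_test make make -j144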
00:01:44.620 The Meson build system
00:01:44.620 Version: 1.5.0
00:01:44.620 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:44.620 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:44.620 Build type: native build
00:01:44.620 Program cat found: YES (/usr/bin/cat)
00:01:44.620 Project name: DPDK
00:01:44.620 Project version: 24.03.0
00:01:44.620 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:44.620 C linker for the host machine: cc ld.bfd 2.40-14
00:01:44.620 Host machine cpu family: x86_64
00:01:44.620 Host machine cpu: x86_64
00:01:44.620 Message: ## Building in Developer Mode ##
00:01:44.620 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:44.620 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:44.620 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:44.620 Program python3 found: YES (/usr/bin/python3)
00:01:44.620 Program cat found: YES (/usr/bin/cat)
00:01:44.620 Compiler for C supports arguments -march=native: YES
00:01:44.620 Checking for size of "void *" : 8
00:01:44.620 Checking for size of "void *" : 8 (cached)
00:01:44.620 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:44.620 Library m found: YES
00:01:44.620 Library numa found: YES
00:01:44.620 Has header "numaif.h" : YES
00:01:44.620 Library fdt found: NO
00:01:44.620 Library execinfo found: NO
00:01:44.620 Has header "execinfo.h" : YES
00:01:44.621 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:44.621 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:44.621 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:44.621 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:44.621 Run-time dependency openssl found: YES 3.1.1
00:01:44.621 Run-time dependency libpcap found: YES 1.10.4
00:01:44.621 Has header "pcap.h" with dependency libpcap: YES
00:01:44.621 Compiler for C supports arguments -Wcast-qual: YES
00:01:44.621 Compiler for C supports arguments -Wdeprecated: YES
00:01:44.621 Compiler for C supports arguments -Wformat: YES
00:01:44.621 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:44.621 Compiler for C supports arguments -Wformat-security: NO
00:01:44.621 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:44.621 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:44.621 Compiler for C supports arguments -Wnested-externs: YES
00:01:44.621 Compiler for C supports arguments -Wold-style-definition: YES
00:01:44.621 Compiler for C supports arguments -Wpointer-arith: YES
00:01:44.621 Compiler for C supports arguments -Wsign-compare: YES
00:01:44.621 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:44.621 Compiler for C supports arguments -Wundef: YES
00:01:44.621 Compiler for C supports arguments -Wwrite-strings: YES
00:01:44.621 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:44.621 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:44.621 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:44.621 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:44.621 Program objdump found: YES (/usr/bin/objdump)
00:01:44.621 Compiler for C supports arguments -mavx512f: YES
00:01:44.621 Checking if "AVX512 checking" compiles: YES
00:01:44.621 Fetching value of define "__SSE4_2__" : 1
00:01:44.621 Fetching value of define "__AES__" : 1
00:01:44.621 Fetching value of define "__AVX__" : 1
00:01:44.621 Fetching value of define "__AVX2__" : 1
00:01:44.621 Fetching value of define "__AVX512BW__" : 1
00:01:44.621 Fetching value of define "__AVX512CD__" : 1
00:01:44.621 Fetching value of define "__AVX512DQ__" : 1
00:01:44.621 Fetching value of define "__AVX512F__" : 1
00:01:44.621 Fetching value of define "__AVX512VL__" : 1
00:01:44.621 Fetching value of define "__PCLMUL__" : 1
00:01:44.621 Fetching value of define "__RDRND__" : 1
00:01:44.621 Fetching value of define "__RDSEED__" : 1
00:01:44.621 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:44.621 Fetching value of define "__znver1__" : (undefined)
00:01:44.621 Fetching value of define "__znver2__" : (undefined)
00:01:44.621 Fetching value of define "__znver3__" : (undefined)
00:01:44.621 Fetching value of define "__znver4__" : (undefined)
00:01:44.621 Library asan found: YES
00:01:44.621 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:44.621 Message: lib/log: Defining dependency "log"
00:01:44.621 Message: lib/kvargs: Defining dependency "kvargs"
00:01:44.621 Message: lib/telemetry: Defining dependency "telemetry"
00:01:44.621 Library rt found: YES
00:01:44.621 Checking for function "getentropy" : NO
00:01:44.621 Message: lib/eal: Defining dependency "eal"
00:01:44.621 Message: lib/ring: Defining dependency "ring"
00:01:44.621 Message: lib/rcu: Defining dependency "rcu"
00:01:44.621 Message: lib/mempool: Defining dependency "mempool"
00:01:44.621 Message: lib/mbuf: Defining dependency "mbuf"
00:01:44.621 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:44.621 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:44.621 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:44.621 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:44.621 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:44.621 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:44.621 Compiler for C supports arguments -mpclmul: YES
00:01:44.621 Compiler for C supports arguments -maes: YES
00:01:44.621 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:44.621 Compiler for C supports arguments -mavx512bw: YES
00:01:44.621 Compiler for C supports arguments -mavx512dq: YES
00:01:44.621 Compiler for C supports arguments -mavx512vl: YES
00:01:44.621 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:44.621 Compiler for C supports arguments -mavx2: YES
00:01:44.621 Compiler for C supports arguments -mavx: YES
00:01:44.621 Message: lib/net: Defining dependency "net"
00:01:44.621 Message: lib/meter: Defining dependency "meter"
00:01:44.621 Message: lib/ethdev: Defining dependency "ethdev"
00:01:44.621 Message: lib/pci: Defining dependency "pci"
00:01:44.621 Message: lib/cmdline: Defining dependency "cmdline"
00:01:44.621 Message: lib/hash: Defining dependency "hash"
00:01:44.621 Message: lib/timer: Defining dependency "timer"
00:01:44.621 Message: lib/compressdev: Defining dependency "compressdev"
00:01:44.621 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:44.621 Message: lib/dmadev: Defining dependency "dmadev"
00:01:44.621 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:44.621 Message: lib/power: Defining dependency "power"
00:01:44.621 Message: lib/reorder: Defining dependency "reorder"
00:01:44.621 Message: lib/security: Defining dependency "security"
00:01:44.621 Has header "linux/userfaultfd.h" : YES
00:01:44.621 Has header "linux/vduse.h" : YES
00:01:44.621 Message: lib/vhost: Defining dependency "vhost"
00:01:44.621 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:44.621 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:44.621 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:44.621 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:44.621 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:44.621 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:44.621 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:44.621 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:44.621 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:44.621 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:44.621 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:44.621 Configuring doxy-api-html.conf using configuration
00:01:44.621 Configuring doxy-api-man.conf using configuration
00:01:44.621 Program mandb found: YES (/usr/bin/mandb)
00:01:44.621 Program sphinx-build found: NO
00:01:44.621 Configuring rte_build_config.h using configuration
00:01:44.621 Message:
00:01:44.621 =================
00:01:44.621 Applications Enabled
00:01:44.621 =================
00:01:44.621
00:01:44.621 apps:
00:01:44.621
00:01:44.621
00:01:44.621 Message:
00:01:44.621 =================
00:01:44.621 Libraries Enabled
00:01:44.621 =================
00:01:44.621
00:01:44.621 libs:
00:01:44.621 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:44.621 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:44.621 cryptodev, dmadev, power, reorder, security, vhost,
00:01:44.621
00:01:44.621 Message:
00:01:44.621 ===============
00:01:44.621 Drivers Enabled
00:01:44.621 ===============
00:01:44.621
00:01:44.621 common:
00:01:44.621
00:01:44.621 bus:
00:01:44.621 pci, vdev,
00:01:44.621 mempool:
00:01:44.621 ring,
00:01:44.621 dma:
00:01:44.621
00:01:44.621 net:
00:01:44.621
00:01:44.621 crypto:
00:01:44.621
00:01:44.621 compress:
00:01:44.621
00:01:44.621 vdpa:
00:01:44.621
00:01:44.621
00:01:44.621 Message:
00:01:44.621 =================
00:01:44.621 Content Skipped
00:01:44.621 =================
00:01:44.621
00:01:44.621 apps:
00:01:44.621 dumpcap: explicitly disabled via build config
00:01:44.621 graph: explicitly disabled via build config
00:01:44.621 pdump: explicitly disabled via build config
00:01:44.621 proc-info: explicitly disabled via build config
00:01:44.621 test-acl: explicitly disabled via build config
00:01:44.621 test-bbdev: explicitly disabled via build config
00:01:44.621 test-cmdline: explicitly disabled via build config
00:01:44.621 test-compress-perf: explicitly disabled via build config
00:01:44.621 test-crypto-perf: explicitly disabled via build config
00:01:44.621 test-dma-perf: explicitly disabled via build config
00:01:44.621 test-eventdev: explicitly disabled via build config
00:01:44.621 test-fib: explicitly disabled via build config
00:01:44.621 test-flow-perf: explicitly disabled via build config
00:01:44.621 test-gpudev: explicitly disabled via build config
00:01:44.621 test-mldev: explicitly disabled via build config
00:01:44.621 test-pipeline: explicitly disabled via build config
00:01:44.621 test-pmd: explicitly disabled via build config
00:01:44.621 test-regex: explicitly disabled via build config
00:01:44.621 test-sad: explicitly disabled via build config
00:01:44.621 test-security-perf: explicitly disabled via build config
00:01:44.621
00:01:44.621 libs:
00:01:44.621 argparse: explicitly disabled via build config
00:01:44.621 metrics: explicitly disabled via build config
00:01:44.621 acl: explicitly disabled via build config
00:01:44.621 bbdev: explicitly disabled via build config
00:01:44.621 bitratestats: explicitly disabled via build config
00:01:44.621 bpf: explicitly disabled via build config
00:01:44.621 cfgfile: explicitly disabled via build config
00:01:44.621 distributor: explicitly disabled via build config
00:01:44.621 efd: explicitly disabled via build config
00:01:44.621 eventdev: explicitly disabled via build config
00:01:44.621 dispatcher: explicitly disabled via build config
00:01:44.621 gpudev: explicitly disabled via build config
00:01:44.621 gro: explicitly disabled via build config
00:01:44.621 gso: explicitly disabled via build config
00:01:44.621 ip_frag: explicitly disabled via build config
00:01:44.621 jobstats: explicitly disabled via build config
00:01:44.621 latencystats: explicitly disabled via build config
00:01:44.621 lpm: explicitly disabled via build config
00:01:44.621 member: explicitly disabled via build config
00:01:44.621 pcapng: explicitly disabled via build config
00:01:44.621 rawdev: explicitly disabled via build config
00:01:44.621 regexdev: explicitly disabled via build config
00:01:44.621 mldev: explicitly disabled via build config
00:01:44.621 rib: explicitly disabled via build config
00:01:44.621 sched: explicitly disabled via build config
00:01:44.621 stack: explicitly disabled via build config
00:01:44.621 ipsec: explicitly disabled via build config
00:01:44.621 pdcp: explicitly disabled via build config
00:01:44.622 fib: explicitly disabled via build config
00:01:44.622 port: explicitly disabled via build config
00:01:44.622 pdump: explicitly disabled via build config
00:01:44.622 table: explicitly disabled via build config
00:01:44.622 pipeline: explicitly disabled via build config
00:01:44.622 graph: explicitly disabled via build config
00:01:44.622 node: explicitly disabled via build config
00:01:44.622
00:01:44.622 drivers:
00:01:44.622 common/cpt: not in enabled drivers build config
00:01:44.622 common/dpaax: not in enabled drivers build config
00:01:44.622 common/iavf: not in enabled drivers build config
00:01:44.622 common/idpf: not in enabled drivers build config
00:01:44.622 common/ionic: not in enabled drivers build config
00:01:44.622 common/mvep: not in enabled drivers build config
00:01:44.622 common/octeontx: not in enabled drivers build config
00:01:44.622 bus/auxiliary: not in enabled drivers build config
00:01:44.622 bus/cdx: not in enabled drivers build config
00:01:44.622 bus/dpaa: not in enabled drivers build config
00:01:44.622 bus/fslmc: not in enabled drivers build config
00:01:44.622 bus/ifpga: not in enabled drivers build config
00:01:44.622 bus/platform: not in enabled drivers build config
00:01:44.622 bus/uacce: not in enabled drivers build config
00:01:44.622 bus/vmbus: not in enabled drivers build config
00:01:44.622 common/cnxk: not in enabled drivers build config
00:01:44.622 common/mlx5: not in enabled drivers build config
00:01:44.622 common/nfp: not in enabled drivers build config
00:01:44.622 common/nitrox: not in enabled drivers build config
00:01:44.622 common/qat: not in enabled drivers build config
00:01:44.622 common/sfc_efx: not in enabled drivers build config
00:01:44.622 mempool/bucket: not in enabled drivers build config
00:01:44.622 mempool/cnxk: not in enabled drivers build config
00:01:44.622 mempool/dpaa: not in enabled drivers build config
00:01:44.622 mempool/dpaa2: not in enabled drivers build config
00:01:44.622 mempool/octeontx: not in enabled drivers build config
00:01:44.622 mempool/stack: not in enabled drivers build config
00:01:44.622 dma/cnxk: not in enabled drivers build config
00:01:44.622 dma/dpaa: not in enabled drivers build config
00:01:44.622 dma/dpaa2: not in enabled drivers build config
00:01:44.622 dma/hisilicon: not in enabled drivers build config
00:01:44.622 dma/idxd: not in enabled drivers build config
00:01:44.622 dma/ioat: not in enabled drivers build config
00:01:44.622 dma/skeleton: not in enabled drivers build config
00:01:44.622 net/af_packet: not in enabled drivers build config
00:01:44.622 net/af_xdp: not in enabled drivers build config
00:01:44.622 net/ark: not in enabled drivers build config
00:01:44.622 net/atlantic: not in enabled drivers build config
00:01:44.622 net/avp: not in enabled drivers build config
00:01:44.622 net/axgbe: not in enabled drivers build config
00:01:44.622 net/bnx2x: not in enabled drivers build config
00:01:44.622 net/bnxt: not in enabled drivers build config
00:01:44.622 net/bonding: not in enabled drivers build config
00:01:44.622 net/cnxk: not in enabled drivers build config
00:01:44.622 net/cpfl: not in enabled drivers build config
00:01:44.622 net/cxgbe: not in enabled drivers build config
00:01:44.622 net/dpaa: not in enabled drivers build config
00:01:44.622 net/dpaa2: not in enabled drivers build config
00:01:44.622 net/e1000: not in enabled drivers build config
00:01:44.622 net/ena: not in enabled drivers build config
00:01:44.622 net/enetc: not in enabled drivers build config
00:01:44.622 net/enetfec: not in enabled drivers build config
00:01:44.622 net/enic: not in enabled drivers build config
00:01:44.622 net/failsafe: not in enabled drivers build config
00:01:44.622 net/fm10k: not in enabled drivers build config
00:01:44.622 net/gve: not in enabled drivers build config
00:01:44.622 net/hinic: not in enabled drivers build config
00:01:44.622 net/hns3: not in enabled drivers build config
00:01:44.622 net/i40e: not in enabled drivers build config
00:01:44.622 net/iavf: not in enabled drivers build config
00:01:44.622 net/ice: not in enabled drivers build config
00:01:44.622 net/idpf: not in enabled drivers build config
00:01:44.622 net/igc: not in enabled drivers build config
00:01:44.622 net/ionic: not in enabled drivers build config
00:01:44.622 net/ipn3ke: not in enabled drivers build config
00:01:44.622 net/ixgbe: not in enabled drivers build config
00:01:44.622 net/mana: not in enabled drivers build config
00:01:44.622 net/memif: not in enabled drivers build config
00:01:44.622 net/mlx4: not in enabled drivers build config
00:01:44.622 net/mlx5: not in enabled drivers build config
00:01:44.622 net/mvneta: not in enabled drivers build config
00:01:44.622 net/mvpp2: not in enabled drivers build config
00:01:44.622 net/netvsc: not in enabled drivers build config
00:01:44.622 net/nfb: not in enabled drivers build config
00:01:44.622 net/nfp: not in enabled drivers build config
00:01:44.622 net/ngbe: not in enabled drivers build config
00:01:44.622 net/null: not in enabled drivers build config
00:01:44.622 net/octeontx: not in enabled drivers build config
00:01:44.622 net/octeon_ep: not in enabled drivers build config
00:01:44.622 net/pcap: not in enabled drivers build config
00:01:44.622 net/pfe: not in enabled drivers build config
00:01:44.622 net/qede: not in enabled drivers build config
00:01:44.622 net/ring: not in enabled drivers build config
00:01:44.622 net/sfc: not in enabled drivers build config
00:01:44.622 net/softnic: not in enabled drivers build config
00:01:44.622 net/tap: not in enabled drivers build config
00:01:44.622 net/thunderx: not in enabled drivers build config
00:01:44.622 net/txgbe: not in enabled drivers build config
00:01:44.622 net/vdev_netvsc: not in enabled drivers build config
00:01:44.622 net/vhost: not in enabled drivers build config
00:01:44.622 net/virtio: not in enabled drivers build config
00:01:44.622 net/vmxnet3: not in enabled drivers build config
00:01:44.622 raw/*: missing internal dependency, "rawdev"
00:01:44.622 crypto/armv8: not in enabled drivers build config
00:01:44.622 crypto/bcmfs: not in enabled drivers build config
00:01:44.622 crypto/caam_jr: not in enabled drivers build config
00:01:44.622 crypto/ccp: not in enabled drivers build config
00:01:44.622 crypto/cnxk: not in enabled drivers build config
00:01:44.622 crypto/dpaa_sec: not in enabled drivers build config
00:01:44.622 crypto/dpaa2_sec: not in enabled drivers build config
00:01:44.622 crypto/ipsec_mb: not in enabled drivers build config
00:01:44.622 crypto/mlx5: not in enabled drivers build config
00:01:44.622 crypto/mvsam: not in enabled drivers build config
00:01:44.622 crypto/nitrox: not in enabled drivers build config
00:01:44.622 crypto/null: not in enabled drivers build config
00:01:44.622 crypto/octeontx: not in enabled drivers build config
00:01:44.622 crypto/openssl: not in enabled drivers build config
00:01:44.622 crypto/scheduler: not in enabled drivers build config
00:01:44.622 crypto/uadk: not in enabled drivers build config
00:01:44.622 crypto/virtio: not in enabled drivers build config
00:01:44.622 compress/isal: not in enabled drivers build config
00:01:44.622 compress/mlx5: not in enabled drivers build config
00:01:44.622 compress/nitrox: not in enabled drivers build config
00:01:44.622 compress/octeontx: not in enabled drivers build config
00:01:44.622 compress/zlib: not in enabled drivers build config
00:01:44.622 regex/*: missing internal dependency, "regexdev"
00:01:44.622 ml/*: missing internal dependency, "mldev"
00:01:44.622 vdpa/ifc: not in enabled drivers build config
00:01:44.622 vdpa/mlx5: not in enabled drivers build config
00:01:44.622 vdpa/nfp: not in enabled drivers build config
00:01:44.622 vdpa/sfc: not in enabled drivers build config
00:01:44.622 event/*: missing internal dependency, "eventdev"
00:01:44.622 baseband/*: missing internal dependency, "bbdev"
00:01:44.622 gpu/*: missing internal dependency, "gpudev"
00:01:44.622
00:01:44.622
00:01:44.883 Build targets in project: 84
00:01:44.883
00:01:44.883 DPDK 24.03.0
00:01:44.883
00:01:44.883 User defined options
00:01:44.883 buildtype : debug
00:01:44.883 default_library : shared
00:01:44.883 libdir : lib
00:01:44.883 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:44.883 b_sanitize : address
00:01:44.883 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:44.883 c_link_args :
00:01:44.883 cpu_instruction_set: native
00:01:44.883 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:44.883 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:44.883 enable_docs : false
00:01:44.883 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:44.883 enable_kmods : false
00:01:44.883 max_lcores : 128
00:01:44.883 tests : false
00:01:44.883
00:01:44.883 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:45.460 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:45.460 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:45.460 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:45.460 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:45.460 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:45.460 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:45.460 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:45.460 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:45.460 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:45.460 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:45.460 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:45.460 [11/267] Linking static target lib/librte_kvargs.a
00:01:45.460 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:45.460 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:45.460 [14/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:45.460 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:45.460 [16/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:45.460 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:45.460 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:45.460 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:45.460 [20/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:45.460 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:45.460 [22/267] Linking static target lib/librte_log.a
00:01:45.460 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:45.460 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:45.460 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:45.460 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:45.460 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:45.460 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:45.460 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:45.460 [30/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:45.460 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:45.460 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:45.460 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:45.460 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:45.719 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:45.978 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:45.978 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:45.978 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:45.978 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:45.978 [40/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:45.978 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.978 [42/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.978 [43/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:45.978 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:45.978 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:45.978 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:45.978 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:45.978 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:45.978 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:45.978 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:45.978 [51/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:45.978 [52/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:45.978 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:45.978 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:45.978 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:45.978 [56/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:45.978 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:45.978 [58/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:45.978 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:45.978 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:45.978 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:45.978 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:45.978 [63/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:45.978 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:45.978 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:45.978 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:45.978 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:45.978 [68/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:45.978 [69/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:45.978 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:45.978 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:45.978 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:45.978 [73/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:45.978 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:46.238 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:46.238 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:46.238 [77/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:46.238 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:46.238 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:46.238 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:46.238 [81/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:46.238 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:46.238 [83/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:46.238 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:46.238 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:46.238 [86/267] Linking static target lib/librte_meter.a
00:01:46.238 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:46.238 [88/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:46.238 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:46.238 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:46.238 [91/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:46.238 [92/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:46.238 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:46.238 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:46.238 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:46.238 [96/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:46.238 [97/267] Linking static target lib/librte_cmdline.a
00:01:46.238 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:46.238 [99/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:46.238 [100/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:46.238 [101/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:46.238 [102/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:46.238 [103/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:46.238 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:46.238 [105/267] Linking static target lib/librte_ring.a
00:01:46.238 [106/267] Linking static target lib/librte_timer.a
00:01:46.238 [107/267] Linking static target lib/librte_telemetry.a
00:01:46.238 [108/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:46.238 [109/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:46.238 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:46.238 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:46.238 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:46.238 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:46.238 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:46.238 [115/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:46.238 [116/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:46.238 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:46.238 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:46.238 [119/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:46.238 [120/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:46.238 [121/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:46.238 [122/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:46.238 [123/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:46.238 [124/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:46.238 [125/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:46.238 [126/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:46.239 [127/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:46.239 [128/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:46.239 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:46.239 [130/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:46.239 [131/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:46.239 [132/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.239 [133/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:46.239 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:46.239 [135/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.239 [136/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:46.239 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:46.239 [138/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:46.239 [139/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:46.239 [140/267] Linking target lib/librte_log.so.24.1 00:01:46.239 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:46.239 [142/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.239 [143/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:46.239 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:46.239 [145/267] Linking static target lib/librte_dmadev.a 00:01:46.239 [146/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:46.239 [147/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:46.239 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:46.239 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:46.239 [150/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:46.239 [151/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:46.239 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:46.239 [153/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:46.239 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.239 [155/267] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:46.239 [156/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:46.239 [157/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:46.239 [158/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:46.239 [159/267] Linking static target lib/librte_mempool.a 00:01:46.239 [160/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:46.239 [161/267] Linking static target lib/librte_rcu.a 00:01:46.239 [162/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:46.499 [163/267] Linking static target lib/librte_power.a 00:01:46.499 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:46.499 [165/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:46.499 [166/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:46.499 [167/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.499 [168/267] Linking static target lib/librte_net.a 00:01:46.499 [169/267] Linking static target lib/librte_reorder.a 00:01:46.499 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:46.499 [171/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:46.499 [172/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:46.499 [173/267] Linking static target lib/librte_compressdev.a 00:01:46.499 [174/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:46.499 [175/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:46.499 [176/267] Linking target lib/librte_kvargs.so.24.1 00:01:46.499 [177/267] Linking static target lib/librte_eal.a 00:01:46.499 [178/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.499 [179/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:46.499 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:46.499 [181/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:46.499 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:46.499 [183/267] Linking static target drivers/librte_bus_vdev.a 00:01:46.499 [184/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.499 [185/267] Linking static target lib/librte_security.a 00:01:46.499 [186/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:46.499 [187/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.499 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:46.499 [189/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:46.499 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.499 [191/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.760 [192/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.760 [193/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:46.760 [194/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.760 [195/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:46.760 [196/267] Linking 
static target drivers/librte_bus_pci.a 00:01:46.760 [197/267] Linking static target lib/librte_mbuf.a 00:01:46.760 [198/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:46.760 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:46.760 [200/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.760 [201/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.760 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.760 [203/267] Linking static target drivers/librte_mempool_ring.a 00:01:46.760 [204/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.760 [205/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.760 [206/267] Linking target lib/librte_telemetry.so.24.1 00:01:46.760 [207/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.021 [208/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:47.021 [209/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.021 [210/267] Linking static target lib/librte_hash.a 00:01:47.021 [211/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:47.021 [212/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.282 [213/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:47.282 [214/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:47.282 [215/267] Linking static target lib/librte_cryptodev.a 00:01:47.282 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.282 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.282 [218/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.282 [219/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.543 [220/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.543 [221/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.543 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.115 [223/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.116 [224/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:48.116 [225/267] Linking static target lib/librte_ethdev.a 00:01:48.377 [226/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.321 [227/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.708 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.708 [229/267] Linking static target lib/librte_vhost.a 00:01:53.251 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.456 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.026 [232/267] Generating lib/eal.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:58.026 [233/267] Linking target lib/librte_eal.so.24.1 00:01:58.026 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:58.026 [235/267] Linking target lib/librte_ring.so.24.1 00:01:58.026 [236/267] Linking target lib/librte_meter.so.24.1 00:01:58.026 [237/267] Linking target lib/librte_timer.so.24.1 00:01:58.026 [238/267] Linking target lib/librte_pci.so.24.1 00:01:58.026 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:58.026 [240/267] Linking target lib/librte_dmadev.so.24.1 00:01:58.287 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:58.287 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:58.287 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:58.287 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:58.287 [245/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:58.287 [246/267] Linking target lib/librte_rcu.so.24.1 00:01:58.287 [247/267] Linking target lib/librte_mempool.so.24.1 00:01:58.287 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:58.547 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:58.547 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:58.547 [251/267] Linking target lib/librte_mbuf.so.24.1 00:01:58.547 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:58.547 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:58.807 [254/267] Linking target lib/librte_compressdev.so.24.1 00:01:58.807 [255/267] Linking target lib/librte_reorder.so.24.1 00:01:58.807 [256/267] Linking target lib/librte_net.so.24.1 00:01:58.807 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:58.807 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:58.807 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:58.807 [260/267] Linking target lib/librte_hash.so.24.1 00:01:58.807 [261/267] Linking target lib/librte_cmdline.so.24.1 00:01:58.807 [262/267] Linking target lib/librte_security.so.24.1 00:01:58.807 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:59.068 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:59.068 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:59.068 [266/267] Linking target lib/librte_power.so.24.1 00:01:59.068 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:59.068 INFO: autodetecting backend as ninja 00:01:59.068 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:03.271 CC lib/ut_mock/mock.o 00:02:03.271 CC lib/ut/ut.o 00:02:03.271 CC lib/log/log.o 00:02:03.271 CC lib/log/log_flags.o 00:02:03.271 CC lib/log/log_deprecated.o 00:02:03.531 LIB libspdk_ut.a 00:02:03.531 LIB libspdk_ut_mock.a 00:02:03.531 LIB libspdk_log.a 00:02:03.531 SO libspdk_ut.so.2.0 00:02:03.531 SO libspdk_ut_mock.so.6.0 00:02:03.531 SO libspdk_log.so.7.1 00:02:03.531 SYMLINK libspdk_ut_mock.so 00:02:03.531 SYMLINK libspdk_ut.so 00:02:03.531 SYMLINK libspdk_log.so 00:02:03.793 CC lib/ioat/ioat.o 00:02:04.054 CXX lib/trace_parser/trace.o 00:02:04.054 CC 
lib/dma/dma.o 00:02:04.054 CC lib/util/base64.o 00:02:04.054 CC lib/util/bit_array.o 00:02:04.054 CC lib/util/crc32.o 00:02:04.054 CC lib/util/cpuset.o 00:02:04.054 CC lib/util/crc16.o 00:02:04.054 CC lib/util/crc32c.o 00:02:04.054 CC lib/util/crc32_ieee.o 00:02:04.054 CC lib/util/crc64.o 00:02:04.054 CC lib/util/dif.o 00:02:04.054 CC lib/util/fd.o 00:02:04.054 CC lib/util/fd_group.o 00:02:04.054 CC lib/util/file.o 00:02:04.054 CC lib/util/hexlify.o 00:02:04.054 CC lib/util/iov.o 00:02:04.054 CC lib/util/math.o 00:02:04.054 CC lib/util/net.o 00:02:04.054 CC lib/util/pipe.o 00:02:04.054 CC lib/util/strerror_tls.o 00:02:04.054 CC lib/util/string.o 00:02:04.054 CC lib/util/uuid.o 00:02:04.054 CC lib/util/xor.o 00:02:04.054 CC lib/util/zipf.o 00:02:04.054 CC lib/util/md5.o 00:02:04.054 CC lib/vfio_user/host/vfio_user.o 00:02:04.054 CC lib/vfio_user/host/vfio_user_pci.o 00:02:04.315 LIB libspdk_dma.a 00:02:04.315 SO libspdk_dma.so.5.0 00:02:04.315 LIB libspdk_ioat.a 00:02:04.315 SYMLINK libspdk_dma.so 00:02:04.315 SO libspdk_ioat.so.7.0 00:02:04.315 SYMLINK libspdk_ioat.so 00:02:04.576 LIB libspdk_vfio_user.a 00:02:04.576 SO libspdk_vfio_user.so.5.0 00:02:04.576 SYMLINK libspdk_vfio_user.so 00:02:04.576 LIB libspdk_util.a 00:02:04.837 SO libspdk_util.so.10.1 00:02:04.837 SYMLINK libspdk_util.so 00:02:04.837 LIB libspdk_trace_parser.a 00:02:04.837 SO libspdk_trace_parser.so.6.0 00:02:05.135 SYMLINK libspdk_trace_parser.so 00:02:05.135 CC lib/rdma_utils/rdma_utils.o 00:02:05.135 CC lib/json/json_parse.o 00:02:05.135 CC lib/json/json_util.o 00:02:05.135 CC lib/json/json_write.o 00:02:05.135 CC lib/vmd/vmd.o 00:02:05.135 CC lib/vmd/led.o 00:02:05.135 CC lib/conf/conf.o 00:02:05.135 CC lib/env_dpdk/env.o 00:02:05.135 CC lib/env_dpdk/memory.o 00:02:05.448 CC lib/idxd/idxd.o 00:02:05.448 CC lib/idxd/idxd_kernel.o 00:02:05.448 CC lib/env_dpdk/pci.o 00:02:05.448 CC lib/idxd/idxd_user.o 00:02:05.448 CC lib/env_dpdk/init.o 00:02:05.448 CC lib/env_dpdk/pci_virtio.o 00:02:05.448 CC lib/env_dpdk/threads.o 00:02:05.448 CC lib/env_dpdk/pci_ioat.o 00:02:05.448 CC lib/env_dpdk/pci_vmd.o 00:02:05.448 CC lib/env_dpdk/pci_idxd.o 00:02:05.448 CC lib/env_dpdk/pci_event.o 00:02:05.448 CC lib/env_dpdk/sigbus_handler.o 00:02:05.448 CC lib/env_dpdk/pci_dpdk.o 00:02:05.448 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:05.448 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:05.448 LIB libspdk_conf.a 00:02:05.448 LIB libspdk_rdma_utils.a 00:02:05.448 LIB libspdk_json.a 00:02:05.448 SO libspdk_conf.so.6.0 00:02:05.448 SO libspdk_rdma_utils.so.1.0 00:02:05.793 SO libspdk_json.so.6.0 00:02:05.793 SYMLINK libspdk_conf.so 00:02:05.793 SYMLINK libspdk_rdma_utils.so 00:02:05.793 SYMLINK libspdk_json.so 00:02:06.053 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:06.053 CC lib/rdma_provider/common.o 00:02:06.053 LIB libspdk_idxd.a 00:02:06.053 CC lib/jsonrpc/jsonrpc_server.o 00:02:06.053 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:06.053 CC lib/jsonrpc/jsonrpc_client.o 00:02:06.053 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:06.053 LIB libspdk_vmd.a 00:02:06.053 SO libspdk_idxd.so.12.1 00:02:06.053 SO libspdk_vmd.so.6.0 00:02:06.053 SYMLINK libspdk_idxd.so 00:02:06.053 SYMLINK libspdk_vmd.so 00:02:06.053 LIB libspdk_rdma_provider.a 00:02:06.053 SO libspdk_rdma_provider.so.7.0 00:02:06.314 LIB libspdk_jsonrpc.a 00:02:06.314 SYMLINK libspdk_rdma_provider.so 00:02:06.314 SO libspdk_jsonrpc.so.6.0 00:02:06.314 LIB libspdk_env_dpdk.a 00:02:06.314 SO libspdk_env_dpdk.so.15.1 00:02:06.314 SYMLINK libspdk_jsonrpc.so 00:02:06.576 SYMLINK libspdk_env_dpdk.so 
00:02:06.576 CC lib/rpc/rpc.o 00:02:06.837 LIB libspdk_rpc.a 00:02:07.097 SO libspdk_rpc.so.6.0 00:02:07.097 SYMLINK libspdk_rpc.so 00:02:07.357 CC lib/trace/trace.o 00:02:07.357 CC lib/notify/notify.o 00:02:07.357 CC lib/trace/trace_flags.o 00:02:07.357 CC lib/notify/notify_rpc.o 00:02:07.357 CC lib/trace/trace_rpc.o 00:02:07.357 CC lib/keyring/keyring.o 00:02:07.357 CC lib/keyring/keyring_rpc.o 00:02:07.618 LIB libspdk_notify.a 00:02:07.618 SO libspdk_notify.so.6.0 00:02:07.618 LIB libspdk_keyring.a 00:02:07.618 LIB libspdk_trace.a 00:02:07.618 SYMLINK libspdk_notify.so 00:02:07.618 SO libspdk_trace.so.11.0 00:02:07.618 SO libspdk_keyring.so.2.0 00:02:07.879 SYMLINK libspdk_trace.so 00:02:07.879 SYMLINK libspdk_keyring.so 00:02:08.139 CC lib/thread/thread.o 00:02:08.139 CC lib/thread/iobuf.o 00:02:08.139 CC lib/sock/sock.o 00:02:08.139 CC lib/sock/sock_rpc.o 00:02:08.710 LIB libspdk_sock.a 00:02:08.710 SO libspdk_sock.so.10.0 00:02:08.710 SYMLINK libspdk_sock.so 00:02:08.971 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:08.971 CC lib/nvme/nvme_ctrlr.o 00:02:08.971 CC lib/nvme/nvme_fabric.o 00:02:08.971 CC lib/nvme/nvme_ns_cmd.o 00:02:08.971 CC lib/nvme/nvme_ns.o 00:02:08.971 CC lib/nvme/nvme_pcie_common.o 00:02:08.971 CC lib/nvme/nvme_pcie.o 00:02:08.971 CC lib/nvme/nvme_qpair.o 00:02:08.971 CC lib/nvme/nvme.o 00:02:08.971 CC lib/nvme/nvme_quirks.o 00:02:08.971 CC lib/nvme/nvme_transport.o 00:02:08.971 CC lib/nvme/nvme_discovery.o 00:02:08.971 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:08.971 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:08.971 CC lib/nvme/nvme_tcp.o 00:02:08.971 CC lib/nvme/nvme_opal.o 00:02:08.971 CC lib/nvme/nvme_io_msg.o 00:02:08.971 CC lib/nvme/nvme_poll_group.o 00:02:08.971 CC lib/nvme/nvme_zns.o 00:02:08.971 CC lib/nvme/nvme_stubs.o 00:02:08.971 CC lib/nvme/nvme_auth.o 00:02:08.971 CC lib/nvme/nvme_cuse.o 00:02:08.971 CC lib/nvme/nvme_rdma.o 00:02:09.910 LIB libspdk_thread.a 00:02:09.910 SO libspdk_thread.so.11.0 00:02:09.910 SYMLINK libspdk_thread.so 00:02:10.171 CC lib/accel/accel.o 00:02:10.171 CC lib/accel/accel_rpc.o 00:02:10.171 CC lib/accel/accel_sw.o 00:02:10.171 CC lib/virtio/virtio.o 00:02:10.171 CC lib/virtio/virtio_vhost_user.o 00:02:10.171 CC lib/blob/blobstore.o 00:02:10.171 CC lib/virtio/virtio_vfio_user.o 00:02:10.171 CC lib/blob/request.o 00:02:10.171 CC lib/virtio/virtio_pci.o 00:02:10.171 CC lib/blob/zeroes.o 00:02:10.171 CC lib/blob/blob_bs_dev.o 00:02:10.171 CC lib/init/json_config.o 00:02:10.171 CC lib/init/subsystem.o 00:02:10.171 CC lib/init/subsystem_rpc.o 00:02:10.171 CC lib/fsdev/fsdev.o 00:02:10.171 CC lib/init/rpc.o 00:02:10.171 CC lib/fsdev/fsdev_io.o 00:02:10.171 CC lib/fsdev/fsdev_rpc.o 00:02:10.430 LIB libspdk_init.a 00:02:10.430 SO libspdk_init.so.6.0 00:02:10.689 SYMLINK libspdk_init.so 00:02:10.689 LIB libspdk_virtio.a 00:02:10.689 SO libspdk_virtio.so.7.0 00:02:10.689 SYMLINK libspdk_virtio.so 00:02:10.950 CC lib/event/app.o 00:02:10.950 CC lib/event/reactor.o 00:02:10.950 CC lib/event/log_rpc.o 00:02:10.950 CC lib/event/app_rpc.o 00:02:10.950 CC lib/event/scheduler_static.o 00:02:10.950 LIB libspdk_fsdev.a 00:02:10.950 SO libspdk_fsdev.so.2.0 00:02:11.212 SYMLINK libspdk_fsdev.so 00:02:11.474 LIB libspdk_nvme.a 00:02:11.474 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:11.474 LIB libspdk_accel.a 00:02:11.474 LIB libspdk_event.a 00:02:11.474 SO libspdk_accel.so.16.0 00:02:11.474 SO libspdk_event.so.14.0 00:02:11.474 SO libspdk_nvme.so.15.0 00:02:11.740 SYMLINK libspdk_accel.so 00:02:11.740 SYMLINK libspdk_event.so 00:02:12.007 SYMLINK 
libspdk_nvme.so 00:02:12.007 CC lib/bdev/bdev.o 00:02:12.007 CC lib/bdev/bdev_rpc.o 00:02:12.007 CC lib/bdev/bdev_zone.o 00:02:12.007 CC lib/bdev/part.o 00:02:12.007 CC lib/bdev/scsi_nvme.o 00:02:12.268 LIB libspdk_fuse_dispatcher.a 00:02:12.268 SO libspdk_fuse_dispatcher.so.1.0 00:02:12.268 SYMLINK libspdk_fuse_dispatcher.so 00:02:14.185 LIB libspdk_blob.a 00:02:14.185 SO libspdk_blob.so.11.0 00:02:14.185 SYMLINK libspdk_blob.so 00:02:14.445 CC lib/blobfs/blobfs.o 00:02:14.445 CC lib/blobfs/tree.o 00:02:14.445 CC lib/lvol/lvol.o 00:02:15.019 LIB libspdk_bdev.a 00:02:15.019 SO libspdk_bdev.so.17.0 00:02:15.280 SYMLINK libspdk_bdev.so 00:02:15.281 LIB libspdk_blobfs.a 00:02:15.560 SO libspdk_blobfs.so.10.0 00:02:15.560 LIB libspdk_lvol.a 00:02:15.560 SYMLINK libspdk_blobfs.so 00:02:15.560 SO libspdk_lvol.so.10.0 00:02:15.560 CC lib/ftl/ftl_core.o 00:02:15.560 CC lib/ftl/ftl_init.o 00:02:15.560 CC lib/ftl/ftl_layout.o 00:02:15.560 CC lib/ftl/ftl_debug.o 00:02:15.560 CC lib/ftl/ftl_io.o 00:02:15.560 CC lib/ftl/ftl_l2p.o 00:02:15.560 CC lib/ftl/ftl_sb.o 00:02:15.560 CC lib/ftl/ftl_l2p_flat.o 00:02:15.560 CC lib/ftl/ftl_nv_cache.o 00:02:15.560 CC lib/ftl/ftl_band.o 00:02:15.560 CC lib/ftl/ftl_band_ops.o 00:02:15.560 CC lib/ftl/ftl_writer.o 00:02:15.560 CC lib/ftl/ftl_rq.o 00:02:15.560 CC lib/scsi/lun.o 00:02:15.560 CC lib/ftl/ftl_reloc.o 00:02:15.560 CC lib/scsi/port.o 00:02:15.560 CC lib/scsi/dev.o 00:02:15.560 CC lib/ftl/ftl_l2p_cache.o 00:02:15.560 CC lib/ublk/ublk.o 00:02:15.560 CC lib/ftl/ftl_p2l.o 00:02:15.560 CC lib/ftl/ftl_p2l_log.o 00:02:15.560 CC lib/ublk/ublk_rpc.o 00:02:15.560 CC lib/scsi/scsi.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt.o 00:02:15.560 CC lib/scsi/scsi_bdev.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:15.560 CC lib/nbd/nbd_rpc.o 00:02:15.560 CC lib/scsi/scsi_pr.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:15.560 CC lib/nbd/nbd.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:15.560 CC lib/scsi/scsi_rpc.o 00:02:15.560 CC lib/scsi/task.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:15.560 CC lib/nvmf/ctrlr.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:15.560 CC lib/nvmf/ctrlr_discovery.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:15.560 CC lib/nvmf/ctrlr_bdev.o 00:02:15.560 CC lib/nvmf/subsystem.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:15.560 CC lib/nvmf/nvmf_rpc.o 00:02:15.560 CC lib/nvmf/nvmf.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:15.560 CC lib/nvmf/tcp.o 00:02:15.560 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:15.560 CC lib/nvmf/transport.o 00:02:15.560 CC lib/ftl/utils/ftl_conf.o 00:02:15.560 CC lib/nvmf/stubs.o 00:02:15.560 CC lib/ftl/utils/ftl_md.o 00:02:15.560 CC lib/ftl/utils/ftl_mempool.o 00:02:15.560 CC lib/nvmf/mdns_server.o 00:02:15.560 CC lib/nvmf/rdma.o 00:02:15.560 CC lib/ftl/utils/ftl_bitmap.o 00:02:15.560 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:15.560 CC lib/ftl/utils/ftl_property.o 00:02:15.560 CC lib/nvmf/auth.o 00:02:15.560 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:15.560 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:15.560 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:15.560 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:15.560 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:15.560 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:15.560 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:15.560 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:15.560 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:02:15.560 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:15.560 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:15.560 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:15.560 CC lib/ftl/base/ftl_base_dev.o 00:02:15.560 CC lib/ftl/base/ftl_base_bdev.o 00:02:15.560 CC lib/ftl/ftl_trace.o 00:02:15.560 SYMLINK libspdk_lvol.so 00:02:16.126 LIB libspdk_scsi.a 00:02:16.386 LIB libspdk_nbd.a 00:02:16.386 SO libspdk_scsi.so.9.0 00:02:16.386 SO libspdk_nbd.so.7.0 00:02:16.386 SYMLINK libspdk_nbd.so 00:02:16.386 SYMLINK libspdk_scsi.so 00:02:16.386 LIB libspdk_ublk.a 00:02:16.386 SO libspdk_ublk.so.3.0 00:02:16.647 SYMLINK libspdk_ublk.so 00:02:16.647 CC lib/iscsi/conn.o 00:02:16.647 CC lib/iscsi/init_grp.o 00:02:16.647 CC lib/iscsi/iscsi.o 00:02:16.647 CC lib/iscsi/param.o 00:02:16.647 CC lib/iscsi/portal_grp.o 00:02:16.647 CC lib/iscsi/tgt_node.o 00:02:16.647 CC lib/iscsi/iscsi_subsystem.o 00:02:16.647 CC lib/vhost/vhost.o 00:02:16.647 CC lib/vhost/vhost_scsi.o 00:02:16.647 CC lib/iscsi/iscsi_rpc.o 00:02:16.647 CC lib/iscsi/task.o 00:02:16.647 CC lib/vhost/vhost_rpc.o 00:02:16.647 CC lib/vhost/vhost_blk.o 00:02:16.647 CC lib/vhost/rte_vhost_user.o 00:02:16.907 LIB libspdk_ftl.a 00:02:16.907 SO libspdk_ftl.so.9.0 00:02:17.167 SYMLINK libspdk_ftl.so 00:02:17.738 LIB libspdk_vhost.a 00:02:17.998 SO libspdk_vhost.so.8.0 00:02:17.998 LIB libspdk_nvmf.a 00:02:17.998 SYMLINK libspdk_vhost.so 00:02:17.998 SO libspdk_nvmf.so.20.0 00:02:18.259 SYMLINK libspdk_nvmf.so 00:02:18.259 LIB libspdk_iscsi.a 00:02:18.259 SO libspdk_iscsi.so.8.0 00:02:18.519 SYMLINK libspdk_iscsi.so 00:02:19.092 CC module/env_dpdk/env_dpdk_rpc.o 00:02:19.351 LIB libspdk_env_dpdk_rpc.a 00:02:19.351 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:19.351 CC module/scheduler/gscheduler/gscheduler.o 00:02:19.351 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:19.351 CC module/accel/ioat/accel_ioat.o 00:02:19.351 CC module/accel/ioat/accel_ioat_rpc.o 00:02:19.351 CC module/keyring/linux/keyring.o 00:02:19.351 CC module/keyring/linux/keyring_rpc.o 00:02:19.351 SO libspdk_env_dpdk_rpc.so.6.0 00:02:19.351 CC module/sock/posix/posix.o 00:02:19.351 CC module/fsdev/aio/fsdev_aio.o 00:02:19.351 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:19.351 CC module/blob/bdev/blob_bdev.o 00:02:19.351 CC module/accel/dsa/accel_dsa.o 00:02:19.351 CC module/fsdev/aio/linux_aio_mgr.o 00:02:19.351 CC module/accel/error/accel_error.o 00:02:19.351 CC module/keyring/file/keyring.o 00:02:19.351 CC module/accel/iaa/accel_iaa.o 00:02:19.351 CC module/accel/dsa/accel_dsa_rpc.o 00:02:19.351 CC module/accel/error/accel_error_rpc.o 00:02:19.351 CC module/accel/iaa/accel_iaa_rpc.o 00:02:19.351 CC module/keyring/file/keyring_rpc.o 00:02:19.351 SYMLINK libspdk_env_dpdk_rpc.so 00:02:19.351 LIB libspdk_scheduler_gscheduler.a 00:02:19.351 LIB libspdk_scheduler_dpdk_governor.a 00:02:19.351 SO libspdk_scheduler_gscheduler.so.4.0 00:02:19.351 LIB libspdk_keyring_linux.a 00:02:19.611 LIB libspdk_keyring_file.a 00:02:19.611 LIB libspdk_accel_ioat.a 00:02:19.611 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:19.611 LIB libspdk_scheduler_dynamic.a 00:02:19.611 SO libspdk_keyring_linux.so.1.0 00:02:19.611 SO libspdk_keyring_file.so.2.0 00:02:19.611 SYMLINK libspdk_scheduler_gscheduler.so 00:02:19.611 SO libspdk_accel_ioat.so.6.0 00:02:19.611 LIB libspdk_accel_iaa.a 00:02:19.611 SO libspdk_scheduler_dynamic.so.4.0 00:02:19.611 LIB libspdk_accel_error.a 00:02:19.611 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:19.611 SYMLINK libspdk_keyring_file.so 
00:02:19.611 SO libspdk_accel_iaa.so.3.0 00:02:19.611 SYMLINK libspdk_keyring_linux.so 00:02:19.611 SO libspdk_accel_error.so.2.0 00:02:19.611 SYMLINK libspdk_accel_ioat.so 00:02:19.611 SYMLINK libspdk_scheduler_dynamic.so 00:02:19.611 LIB libspdk_blob_bdev.a 00:02:19.611 LIB libspdk_accel_dsa.a 00:02:19.611 SYMLINK libspdk_accel_iaa.so 00:02:19.611 SO libspdk_blob_bdev.so.11.0 00:02:19.611 SYMLINK libspdk_accel_error.so 00:02:19.611 SO libspdk_accel_dsa.so.5.0 00:02:19.611 SYMLINK libspdk_blob_bdev.so 00:02:19.871 SYMLINK libspdk_accel_dsa.so 00:02:20.132 LIB libspdk_fsdev_aio.a 00:02:20.132 SO libspdk_fsdev_aio.so.1.0 00:02:20.132 LIB libspdk_sock_posix.a 00:02:20.132 SO libspdk_sock_posix.so.6.0 00:02:20.132 SYMLINK libspdk_fsdev_aio.so 00:02:20.132 CC module/blobfs/bdev/blobfs_bdev.o 00:02:20.133 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:20.391 CC module/bdev/error/vbdev_error.o 00:02:20.391 CC module/bdev/error/vbdev_error_rpc.o 00:02:20.391 CC module/bdev/raid/bdev_raid.o 00:02:20.391 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:20.391 CC module/bdev/raid/bdev_raid_rpc.o 00:02:20.391 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:20.391 CC module/bdev/raid/bdev_raid_sb.o 00:02:20.391 CC module/bdev/gpt/gpt.o 00:02:20.391 CC module/bdev/raid/raid1.o 00:02:20.391 CC module/bdev/raid/raid0.o 00:02:20.391 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:20.391 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:20.391 CC module/bdev/gpt/vbdev_gpt.o 00:02:20.391 CC module/bdev/raid/concat.o 00:02:20.391 CC module/bdev/split/vbdev_split.o 00:02:20.391 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:20.391 CC module/bdev/passthru/vbdev_passthru.o 00:02:20.391 CC module/bdev/split/vbdev_split_rpc.o 00:02:20.391 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:20.391 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:20.391 CC module/bdev/delay/vbdev_delay.o 00:02:20.391 CC module/bdev/aio/bdev_aio.o 00:02:20.391 CC module/bdev/iscsi/bdev_iscsi.o 00:02:20.391 CC module/bdev/aio/bdev_aio_rpc.o 00:02:20.391 CC module/bdev/ftl/bdev_ftl.o 00:02:20.391 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:20.391 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:20.391 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:20.391 CC module/bdev/lvol/vbdev_lvol.o 00:02:20.391 CC module/bdev/nvme/bdev_nvme.o 00:02:20.391 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:20.391 CC module/bdev/null/bdev_null.o 00:02:20.391 CC module/bdev/malloc/bdev_malloc.o 00:02:20.391 CC module/bdev/nvme/nvme_rpc.o 00:02:20.391 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:20.391 CC module/bdev/nvme/bdev_mdns_client.o 00:02:20.391 CC module/bdev/null/bdev_null_rpc.o 00:02:20.391 CC module/bdev/nvme/vbdev_opal.o 00:02:20.391 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:20.391 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:20.391 SYMLINK libspdk_sock_posix.so 00:02:20.391 LIB libspdk_blobfs_bdev.a 00:02:20.391 SO libspdk_blobfs_bdev.so.6.0 00:02:20.651 SYMLINK libspdk_blobfs_bdev.so 00:02:20.651 LIB libspdk_bdev_split.a 00:02:20.651 LIB libspdk_bdev_error.a 00:02:20.651 SO libspdk_bdev_split.so.6.0 00:02:20.651 LIB libspdk_bdev_gpt.a 00:02:20.651 LIB libspdk_bdev_null.a 00:02:20.651 SO libspdk_bdev_error.so.6.0 00:02:20.651 SYMLINK libspdk_bdev_split.so 00:02:20.651 LIB libspdk_bdev_passthru.a 00:02:20.651 LIB libspdk_bdev_ftl.a 00:02:20.651 SO libspdk_bdev_null.so.6.0 00:02:20.651 SO libspdk_bdev_gpt.so.6.0 00:02:20.651 LIB libspdk_bdev_zone_block.a 00:02:20.651 SO libspdk_bdev_passthru.so.6.0 00:02:20.651 SO libspdk_bdev_ftl.so.6.0 00:02:20.651 SYMLINK 
libspdk_bdev_error.so 00:02:20.651 SO libspdk_bdev_zone_block.so.6.0 00:02:20.651 LIB libspdk_bdev_aio.a 00:02:20.651 SYMLINK libspdk_bdev_null.so 00:02:20.651 SYMLINK libspdk_bdev_gpt.so 00:02:20.651 LIB libspdk_bdev_iscsi.a 00:02:20.651 LIB libspdk_bdev_delay.a 00:02:20.651 SO libspdk_bdev_aio.so.6.0 00:02:20.651 LIB libspdk_bdev_malloc.a 00:02:20.912 SYMLINK libspdk_bdev_passthru.so 00:02:20.913 SYMLINK libspdk_bdev_ftl.so 00:02:20.913 SYMLINK libspdk_bdev_zone_block.so 00:02:20.913 SO libspdk_bdev_iscsi.so.6.0 00:02:20.913 SO libspdk_bdev_delay.so.6.0 00:02:20.913 SO libspdk_bdev_malloc.so.6.0 00:02:20.913 SYMLINK libspdk_bdev_aio.so 00:02:20.913 SYMLINK libspdk_bdev_iscsi.so 00:02:20.913 SYMLINK libspdk_bdev_delay.so 00:02:20.913 SYMLINK libspdk_bdev_malloc.so 00:02:20.913 LIB libspdk_bdev_lvol.a 00:02:20.913 LIB libspdk_bdev_virtio.a 00:02:20.913 SO libspdk_bdev_lvol.so.6.0 00:02:20.913 SO libspdk_bdev_virtio.so.6.0 00:02:20.913 SYMLINK libspdk_bdev_virtio.so 00:02:21.174 SYMLINK libspdk_bdev_lvol.so 00:02:21.436 LIB libspdk_bdev_raid.a 00:02:21.436 SO libspdk_bdev_raid.so.6.0 00:02:21.696 SYMLINK libspdk_bdev_raid.so 00:02:23.080 LIB libspdk_bdev_nvme.a 00:02:23.341 SO libspdk_bdev_nvme.so.7.1 00:02:23.341 SYMLINK libspdk_bdev_nvme.so 00:02:24.284 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:24.284 CC module/event/subsystems/vmd/vmd.o 00:02:24.284 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:24.284 CC module/event/subsystems/fsdev/fsdev.o 00:02:24.284 CC module/event/subsystems/sock/sock.o 00:02:24.284 CC module/event/subsystems/iobuf/iobuf.o 00:02:24.284 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:24.284 CC module/event/subsystems/keyring/keyring.o 00:02:24.284 CC module/event/subsystems/scheduler/scheduler.o 00:02:24.284 LIB libspdk_event_vhost_blk.a 00:02:24.284 LIB libspdk_event_sock.a 00:02:24.284 LIB libspdk_event_vmd.a 00:02:24.284 LIB libspdk_event_fsdev.a 00:02:24.284 LIB libspdk_event_keyring.a 00:02:24.284 LIB libspdk_event_scheduler.a 00:02:24.284 LIB libspdk_event_iobuf.a 00:02:24.284 SO libspdk_event_vhost_blk.so.3.0 00:02:24.284 SO libspdk_event_keyring.so.1.0 00:02:24.284 SO libspdk_event_sock.so.5.0 00:02:24.284 SO libspdk_event_vmd.so.6.0 00:02:24.284 SO libspdk_event_fsdev.so.1.0 00:02:24.284 SO libspdk_event_scheduler.so.4.0 00:02:24.284 SO libspdk_event_iobuf.so.3.0 00:02:24.284 SYMLINK libspdk_event_vhost_blk.so 00:02:24.284 SYMLINK libspdk_event_keyring.so 00:02:24.284 SYMLINK libspdk_event_sock.so 00:02:24.284 SYMLINK libspdk_event_fsdev.so 00:02:24.284 SYMLINK libspdk_event_vmd.so 00:02:24.284 SYMLINK libspdk_event_scheduler.so 00:02:24.284 SYMLINK libspdk_event_iobuf.so 00:02:24.855 CC module/event/subsystems/accel/accel.o 00:02:24.855 LIB libspdk_event_accel.a 00:02:24.855 SO libspdk_event_accel.so.6.0 00:02:25.115 SYMLINK libspdk_event_accel.so 00:02:25.376 CC module/event/subsystems/bdev/bdev.o 00:02:25.637 LIB libspdk_event_bdev.a 00:02:25.637 SO libspdk_event_bdev.so.6.0 00:02:25.637 SYMLINK libspdk_event_bdev.so 00:02:25.897 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:25.897 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:25.897 CC module/event/subsystems/ublk/ublk.o 00:02:25.897 CC module/event/subsystems/scsi/scsi.o 00:02:25.897 CC module/event/subsystems/nbd/nbd.o 00:02:26.157 LIB libspdk_event_ublk.a 00:02:26.157 LIB libspdk_event_nbd.a 00:02:26.157 LIB libspdk_event_scsi.a 00:02:26.157 SO libspdk_event_ublk.so.3.0 00:02:26.157 SO libspdk_event_nbd.so.6.0 00:02:26.157 LIB libspdk_event_nvmf.a 00:02:26.157 SO 
libspdk_event_scsi.so.6.0 00:02:26.157 SO libspdk_event_nvmf.so.6.0 00:02:26.157 SYMLINK libspdk_event_ublk.so 00:02:26.157 SYMLINK libspdk_event_nbd.so 00:02:26.157 SYMLINK libspdk_event_scsi.so 00:02:26.418 SYMLINK libspdk_event_nvmf.so 00:02:26.679 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:26.679 CC module/event/subsystems/iscsi/iscsi.o 00:02:26.679 LIB libspdk_event_vhost_scsi.a 00:02:26.679 LIB libspdk_event_iscsi.a 00:02:26.679 SO libspdk_event_vhost_scsi.so.3.0 00:02:26.940 SO libspdk_event_iscsi.so.6.0 00:02:26.940 SYMLINK libspdk_event_vhost_scsi.so 00:02:26.940 SYMLINK libspdk_event_iscsi.so 00:02:27.201 SO libspdk.so.6.0 00:02:27.201 SYMLINK libspdk.so 00:02:27.461 CC app/trace_record/trace_record.o 00:02:27.461 CXX app/trace/trace.o 00:02:27.461 CC app/spdk_lspci/spdk_lspci.o 00:02:27.461 CC app/spdk_nvme_identify/identify.o 00:02:27.461 CC app/spdk_nvme_discover/discovery_aer.o 00:02:27.461 CC app/spdk_nvme_perf/perf.o 00:02:27.461 TEST_HEADER include/spdk/accel.h 00:02:27.461 CC test/rpc_client/rpc_client_test.o 00:02:27.461 CC app/spdk_top/spdk_top.o 00:02:27.461 TEST_HEADER include/spdk/accel_module.h 00:02:27.461 TEST_HEADER include/spdk/barrier.h 00:02:27.461 TEST_HEADER include/spdk/assert.h 00:02:27.461 TEST_HEADER include/spdk/base64.h 00:02:27.461 TEST_HEADER include/spdk/bdev_module.h 00:02:27.461 TEST_HEADER include/spdk/bdev.h 00:02:27.461 TEST_HEADER include/spdk/bdev_zone.h 00:02:27.461 TEST_HEADER include/spdk/bit_array.h 00:02:27.461 TEST_HEADER include/spdk/bit_pool.h 00:02:27.461 TEST_HEADER include/spdk/blob_bdev.h 00:02:27.461 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:27.461 TEST_HEADER include/spdk/blobfs.h 00:02:27.461 TEST_HEADER include/spdk/blob.h 00:02:27.461 TEST_HEADER include/spdk/conf.h 00:02:27.461 TEST_HEADER include/spdk/config.h 00:02:27.461 TEST_HEADER include/spdk/cpuset.h 00:02:27.461 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:27.461 TEST_HEADER include/spdk/crc16.h 00:02:27.461 TEST_HEADER include/spdk/crc32.h 00:02:27.461 TEST_HEADER include/spdk/crc64.h 00:02:27.461 TEST_HEADER include/spdk/dif.h 00:02:27.461 TEST_HEADER include/spdk/dma.h 00:02:27.461 TEST_HEADER include/spdk/endian.h 00:02:27.461 TEST_HEADER include/spdk/env_dpdk.h 00:02:27.461 TEST_HEADER include/spdk/env.h 00:02:27.461 TEST_HEADER include/spdk/event.h 00:02:27.461 TEST_HEADER include/spdk/fd_group.h 00:02:27.461 CC app/spdk_dd/spdk_dd.o 00:02:27.461 TEST_HEADER include/spdk/fd.h 00:02:27.461 CC app/nvmf_tgt/nvmf_main.o 00:02:27.461 TEST_HEADER include/spdk/file.h 00:02:27.461 TEST_HEADER include/spdk/fsdev.h 00:02:27.461 TEST_HEADER include/spdk/fsdev_module.h 00:02:27.461 TEST_HEADER include/spdk/ftl.h 00:02:27.461 TEST_HEADER include/spdk/gpt_spec.h 00:02:27.461 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:27.462 TEST_HEADER include/spdk/hexlify.h 00:02:27.462 TEST_HEADER include/spdk/histogram_data.h 00:02:27.462 TEST_HEADER include/spdk/idxd.h 00:02:27.462 CC app/iscsi_tgt/iscsi_tgt.o 00:02:27.462 TEST_HEADER include/spdk/idxd_spec.h 00:02:27.462 TEST_HEADER include/spdk/init.h 00:02:27.462 CC app/spdk_tgt/spdk_tgt.o 00:02:27.462 TEST_HEADER include/spdk/ioat.h 00:02:27.462 TEST_HEADER include/spdk/ioat_spec.h 00:02:27.462 TEST_HEADER include/spdk/iscsi_spec.h 00:02:27.462 TEST_HEADER include/spdk/json.h 00:02:27.462 TEST_HEADER include/spdk/jsonrpc.h 00:02:27.462 TEST_HEADER include/spdk/keyring.h 00:02:27.462 TEST_HEADER include/spdk/keyring_module.h 00:02:27.462 TEST_HEADER include/spdk/likely.h 00:02:27.462 TEST_HEADER 
include/spdk/lvol.h 00:02:27.462 TEST_HEADER include/spdk/log.h 00:02:27.462 TEST_HEADER include/spdk/md5.h 00:02:27.462 TEST_HEADER include/spdk/memory.h 00:02:27.462 TEST_HEADER include/spdk/mmio.h 00:02:27.462 TEST_HEADER include/spdk/nbd.h 00:02:27.462 TEST_HEADER include/spdk/notify.h 00:02:27.462 TEST_HEADER include/spdk/net.h 00:02:27.462 TEST_HEADER include/spdk/nvme.h 00:02:27.462 TEST_HEADER include/spdk/nvme_intel.h 00:02:27.462 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:27.462 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:27.462 TEST_HEADER include/spdk/nvme_spec.h 00:02:27.462 TEST_HEADER include/spdk/nvme_zns.h 00:02:27.727 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:27.727 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:27.727 TEST_HEADER include/spdk/nvmf.h 00:02:27.727 TEST_HEADER include/spdk/nvmf_transport.h 00:02:27.727 TEST_HEADER include/spdk/nvmf_spec.h 00:02:27.727 TEST_HEADER include/spdk/opal.h 00:02:27.727 TEST_HEADER include/spdk/pci_ids.h 00:02:27.727 TEST_HEADER include/spdk/opal_spec.h 00:02:27.727 TEST_HEADER include/spdk/pipe.h 00:02:27.727 TEST_HEADER include/spdk/queue.h 00:02:27.727 TEST_HEADER include/spdk/reduce.h 00:02:27.727 TEST_HEADER include/spdk/rpc.h 00:02:27.727 TEST_HEADER include/spdk/scheduler.h 00:02:27.727 TEST_HEADER include/spdk/scsi.h 00:02:27.727 TEST_HEADER include/spdk/scsi_spec.h 00:02:27.727 TEST_HEADER include/spdk/sock.h 00:02:27.727 TEST_HEADER include/spdk/stdinc.h 00:02:27.727 TEST_HEADER include/spdk/string.h 00:02:27.727 TEST_HEADER include/spdk/thread.h 00:02:27.727 TEST_HEADER include/spdk/trace.h 00:02:27.727 TEST_HEADER include/spdk/trace_parser.h 00:02:27.727 TEST_HEADER include/spdk/tree.h 00:02:27.727 TEST_HEADER include/spdk/util.h 00:02:27.727 TEST_HEADER include/spdk/ublk.h 00:02:27.727 TEST_HEADER include/spdk/uuid.h 00:02:27.727 TEST_HEADER include/spdk/version.h 00:02:27.727 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:27.727 TEST_HEADER include/spdk/vhost.h 00:02:27.727 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:27.727 TEST_HEADER include/spdk/vmd.h 00:02:27.727 TEST_HEADER include/spdk/xor.h 00:02:27.727 CXX test/cpp_headers/accel.o 00:02:27.727 TEST_HEADER include/spdk/zipf.h 00:02:27.727 CXX test/cpp_headers/accel_module.o 00:02:27.727 CXX test/cpp_headers/barrier.o 00:02:27.727 CXX test/cpp_headers/assert.o 00:02:27.727 CXX test/cpp_headers/base64.o 00:02:27.727 CXX test/cpp_headers/bdev.o 00:02:27.727 CXX test/cpp_headers/bdev_module.o 00:02:27.727 CXX test/cpp_headers/bdev_zone.o 00:02:27.727 CXX test/cpp_headers/bit_array.o 00:02:27.727 CXX test/cpp_headers/bit_pool.o 00:02:27.727 CXX test/cpp_headers/blob_bdev.o 00:02:27.727 CXX test/cpp_headers/blobfs_bdev.o 00:02:27.727 CXX test/cpp_headers/blobfs.o 00:02:27.727 CXX test/cpp_headers/blob.o 00:02:27.727 CXX test/cpp_headers/conf.o 00:02:27.727 CXX test/cpp_headers/config.o 00:02:27.727 CXX test/cpp_headers/cpuset.o 00:02:27.727 CXX test/cpp_headers/crc32.o 00:02:27.727 CXX test/cpp_headers/crc16.o 00:02:27.727 CXX test/cpp_headers/crc64.o 00:02:27.727 CXX test/cpp_headers/dif.o 00:02:27.727 CXX test/cpp_headers/dma.o 00:02:27.727 CXX test/cpp_headers/endian.o 00:02:27.727 CXX test/cpp_headers/event.o 00:02:27.727 CXX test/cpp_headers/env_dpdk.o 00:02:27.727 CXX test/cpp_headers/env.o 00:02:27.727 CXX test/cpp_headers/fd_group.o 00:02:27.727 CXX test/cpp_headers/file.o 00:02:27.727 CXX test/cpp_headers/fd.o 00:02:27.727 CXX test/cpp_headers/fuse_dispatcher.o 00:02:27.727 CXX test/cpp_headers/fsdev.o 00:02:27.727 CXX test/cpp_headers/ftl.o 
00:02:27.727 CXX test/cpp_headers/gpt_spec.o 00:02:27.727 CXX test/cpp_headers/fsdev_module.o 00:02:27.727 CXX test/cpp_headers/hexlify.o 00:02:27.727 CXX test/cpp_headers/idxd.o 00:02:27.727 CXX test/cpp_headers/idxd_spec.o 00:02:27.727 CC examples/util/zipf/zipf.o 00:02:27.727 CXX test/cpp_headers/histogram_data.o 00:02:27.727 CXX test/cpp_headers/ioat.o 00:02:27.727 CXX test/cpp_headers/init.o 00:02:27.727 CXX test/cpp_headers/ioat_spec.o 00:02:27.727 CXX test/cpp_headers/keyring.o 00:02:27.727 CXX test/cpp_headers/jsonrpc.o 00:02:27.727 CXX test/cpp_headers/iscsi_spec.o 00:02:27.727 CXX test/cpp_headers/keyring_module.o 00:02:27.727 CXX test/cpp_headers/likely.o 00:02:27.727 CXX test/cpp_headers/json.o 00:02:27.727 CC examples/ioat/verify/verify.o 00:02:27.727 CXX test/cpp_headers/lvol.o 00:02:27.727 CXX test/cpp_headers/log.o 00:02:27.727 CXX test/cpp_headers/memory.o 00:02:27.727 LINK spdk_lspci 00:02:27.727 CXX test/cpp_headers/md5.o 00:02:27.727 CXX test/cpp_headers/nbd.o 00:02:27.727 CXX test/cpp_headers/net.o 00:02:27.727 CC examples/ioat/perf/perf.o 00:02:27.727 CXX test/cpp_headers/mmio.o 00:02:27.727 CXX test/cpp_headers/nvme.o 00:02:27.727 CXX test/cpp_headers/notify.o 00:02:27.727 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:27.727 CXX test/cpp_headers/nvme_ocssd.o 00:02:27.727 CC test/app/histogram_perf/histogram_perf.o 00:02:27.727 CXX test/cpp_headers/nvme_intel.o 00:02:27.727 CC test/app/stub/stub.o 00:02:27.727 CXX test/cpp_headers/nvme_spec.o 00:02:27.728 CXX test/cpp_headers/nvmf_cmd.o 00:02:27.728 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:27.728 CXX test/cpp_headers/nvme_zns.o 00:02:27.728 CXX test/cpp_headers/nvmf.o 00:02:27.728 CC test/app/jsoncat/jsoncat.o 00:02:27.728 CXX test/cpp_headers/nvmf_spec.o 00:02:27.728 CXX test/cpp_headers/opal.o 00:02:27.728 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:27.728 CXX test/cpp_headers/nvmf_transport.o 00:02:27.728 CXX test/cpp_headers/opal_spec.o 00:02:27.728 CXX test/cpp_headers/pci_ids.o 00:02:27.728 CC test/env/vtophys/vtophys.o 00:02:27.728 CXX test/cpp_headers/rpc.o 00:02:27.728 CXX test/cpp_headers/pipe.o 00:02:27.728 CXX test/cpp_headers/queue.o 00:02:27.728 CXX test/cpp_headers/reduce.o 00:02:27.728 CXX test/cpp_headers/scsi.o 00:02:27.728 CXX test/cpp_headers/scheduler.o 00:02:27.728 CXX test/cpp_headers/scsi_spec.o 00:02:27.728 CXX test/cpp_headers/sock.o 00:02:27.728 CXX test/cpp_headers/stdinc.o 00:02:27.728 CC test/env/memory/memory_ut.o 00:02:27.728 CXX test/cpp_headers/string.o 00:02:27.728 CXX test/cpp_headers/tree.o 00:02:27.728 CXX test/cpp_headers/thread.o 00:02:27.728 CXX test/cpp_headers/trace.o 00:02:27.728 CXX test/cpp_headers/ublk.o 00:02:27.728 CXX test/cpp_headers/trace_parser.o 00:02:27.728 CXX test/cpp_headers/util.o 00:02:27.728 CXX test/cpp_headers/uuid.o 00:02:27.728 CXX test/cpp_headers/version.o 00:02:27.728 CXX test/cpp_headers/vfio_user_pci.o 00:02:27.728 CXX test/cpp_headers/vhost.o 00:02:27.728 CXX test/cpp_headers/vfio_user_spec.o 00:02:27.728 CXX test/cpp_headers/zipf.o 00:02:27.728 CXX test/cpp_headers/vmd.o 00:02:27.728 CXX test/cpp_headers/xor.o 00:02:27.728 CC test/env/pci/pci_ut.o 00:02:27.728 CC test/app/bdev_svc/bdev_svc.o 00:02:27.728 CC test/thread/poller_perf/poller_perf.o 00:02:27.728 CC app/fio/nvme/fio_plugin.o 00:02:27.728 CC app/fio/bdev/fio_plugin.o 00:02:27.728 LINK interrupt_tgt 00:02:27.728 CC test/dma/test_dma/test_dma.o 00:02:27.728 LINK rpc_client_test 00:02:27.989 LINK spdk_nvme_discover 00:02:27.989 LINK nvmf_tgt 00:02:27.989 LINK spdk_tgt 
00:02:27.989 LINK spdk_trace_record 00:02:27.989 LINK iscsi_tgt 00:02:27.989 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:27.989 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:28.248 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:28.248 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:28.248 CC test/env/mem_callbacks/mem_callbacks.o 00:02:28.248 LINK ioat_perf 00:02:28.248 LINK spdk_trace 00:02:28.248 LINK env_dpdk_post_init 00:02:28.248 LINK histogram_perf 00:02:28.248 LINK spdk_dd 00:02:28.248 LINK vtophys 00:02:28.248 LINK zipf 00:02:28.248 LINK poller_perf 00:02:28.248 LINK bdev_svc 00:02:28.509 LINK jsoncat 00:02:28.509 LINK verify 00:02:28.509 LINK stub 00:02:28.509 CC app/vhost/vhost.o 00:02:28.770 LINK pci_ut 00:02:28.770 LINK vhost_fuzz 00:02:28.770 LINK spdk_bdev 00:02:28.770 CC examples/vmd/lsvmd/lsvmd.o 00:02:28.770 CC examples/vmd/led/led.o 00:02:28.770 LINK nvme_fuzz 00:02:28.770 CC examples/sock/hello_world/hello_sock.o 00:02:28.770 LINK vhost 00:02:28.770 CC examples/idxd/perf/perf.o 00:02:28.770 CC test/event/event_perf/event_perf.o 00:02:28.770 CC test/event/reactor_perf/reactor_perf.o 00:02:28.770 CC test/event/reactor/reactor.o 00:02:28.770 LINK test_dma 00:02:28.770 LINK spdk_nvme_perf 00:02:28.770 CC test/event/app_repeat/app_repeat.o 00:02:28.770 LINK spdk_nvme 00:02:28.770 CC test/event/scheduler/scheduler.o 00:02:28.770 CC examples/thread/thread/thread_ex.o 00:02:29.031 LINK mem_callbacks 00:02:29.031 LINK lsvmd 00:02:29.031 LINK led 00:02:29.031 LINK spdk_nvme_identify 00:02:29.031 LINK reactor 00:02:29.031 LINK event_perf 00:02:29.031 LINK reactor_perf 00:02:29.031 LINK spdk_top 00:02:29.031 LINK app_repeat 00:02:29.031 LINK hello_sock 00:02:29.290 LINK scheduler 00:02:29.290 LINK thread 00:02:29.290 LINK idxd_perf 00:02:29.290 CC test/nvme/e2edp/nvme_dp.o 00:02:29.290 CC test/nvme/reset/reset.o 00:02:29.290 CC test/nvme/reserve/reserve.o 00:02:29.290 CC test/nvme/aer/aer.o 00:02:29.290 CC test/nvme/fdp/fdp.o 00:02:29.290 CC test/nvme/overhead/overhead.o 00:02:29.290 CC test/nvme/sgl/sgl.o 00:02:29.290 CC test/nvme/cuse/cuse.o 00:02:29.290 CC test/nvme/err_injection/err_injection.o 00:02:29.290 CC test/nvme/connect_stress/connect_stress.o 00:02:29.290 CC test/nvme/simple_copy/simple_copy.o 00:02:29.290 CC test/nvme/startup/startup.o 00:02:29.290 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:29.290 CC test/nvme/boot_partition/boot_partition.o 00:02:29.290 CC test/nvme/compliance/nvme_compliance.o 00:02:29.290 CC test/nvme/fused_ordering/fused_ordering.o 00:02:29.290 CC test/blobfs/mkfs/mkfs.o 00:02:29.290 CC test/accel/dif/dif.o 00:02:29.551 LINK memory_ut 00:02:29.551 CC test/lvol/esnap/esnap.o 00:02:29.551 LINK boot_partition 00:02:29.551 LINK reserve 00:02:29.551 LINK doorbell_aers 00:02:29.551 LINK startup 00:02:29.551 LINK connect_stress 00:02:29.551 CC examples/nvme/abort/abort.o 00:02:29.551 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:29.551 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:29.551 CC examples/nvme/hello_world/hello_world.o 00:02:29.551 CC examples/nvme/hotplug/hotplug.o 00:02:29.551 CC examples/nvme/reconnect/reconnect.o 00:02:29.551 CC examples/nvme/arbitration/arbitration.o 00:02:29.551 LINK err_injection 00:02:29.551 LINK fused_ordering 00:02:29.551 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:29.551 LINK mkfs 00:02:29.551 LINK simple_copy 00:02:29.551 LINK nvme_dp 00:02:29.551 LINK reset 00:02:29.811 LINK aer 00:02:29.811 LINK sgl 00:02:29.811 LINK overhead 00:02:29.811 CC examples/accel/perf/accel_perf.o 00:02:29.811 CC 
examples/blob/cli/blobcli.o 00:02:29.811 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:29.811 LINK nvme_compliance 00:02:29.811 CC examples/blob/hello_world/hello_blob.o 00:02:29.811 LINK fdp 00:02:29.812 LINK cmb_copy 00:02:29.812 LINK pmr_persistence 00:02:29.812 LINK hello_world 00:02:29.812 LINK hotplug 00:02:30.072 LINK reconnect 00:02:30.072 LINK arbitration 00:02:30.072 LINK hello_blob 00:02:30.072 LINK abort 00:02:30.072 LINK hello_fsdev 00:02:30.072 LINK iscsi_fuzz 00:02:30.072 LINK nvme_manage 00:02:30.072 LINK dif 00:02:30.332 LINK accel_perf 00:02:30.332 LINK blobcli 00:02:30.903 LINK cuse 00:02:30.903 CC test/bdev/bdevio/bdevio.o 00:02:30.903 CC examples/bdev/bdevperf/bdevperf.o 00:02:30.903 CC examples/bdev/hello_world/hello_bdev.o 00:02:31.163 LINK hello_bdev 00:02:31.163 LINK bdevio 00:02:31.733 LINK bdevperf 00:02:32.303 CC examples/nvmf/nvmf/nvmf.o 00:02:32.562 LINK nvmf 00:02:35.109 LINK esnap 00:02:35.109 00:02:35.109 real 0m59.095s 00:02:35.109 user 8m13.541s 00:02:35.109 sys 4m14.711s 00:02:35.109 13:07:43 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:35.109 13:07:43 make -- common/autotest_common.sh@10 -- $ set +x 00:02:35.109 ************************************ 00:02:35.109 END TEST make 00:02:35.109 ************************************ 00:02:35.371 13:07:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:35.371 13:07:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:35.371 13:07:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:35.371 13:07:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.371 13:07:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:35.371 13:07:43 -- pm/common@44 -- $ pid=3491400 00:02:35.371 13:07:43 -- pm/common@50 -- $ kill -TERM 3491400 00:02:35.371 13:07:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.371 13:07:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:35.371 13:07:43 -- pm/common@44 -- $ pid=3491401 00:02:35.371 13:07:43 -- pm/common@50 -- $ kill -TERM 3491401 00:02:35.371 13:07:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.371 13:07:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:35.371 13:07:43 -- pm/common@44 -- $ pid=3491403 00:02:35.371 13:07:43 -- pm/common@50 -- $ kill -TERM 3491403 00:02:35.371 13:07:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.371 13:07:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:35.371 13:07:43 -- pm/common@44 -- $ pid=3491426 00:02:35.371 13:07:43 -- pm/common@50 -- $ sudo -E kill -TERM 3491426 00:02:35.371 13:07:43 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:35.371 13:07:43 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:35.371 13:07:43 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:35.371 13:07:43 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:35.371 13:07:43 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:35.371 13:07:43 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:35.371 13:07:43 -- scripts/common.sh@373 -- # cmp_versions 
1.15 '<' 2 00:02:35.371 13:07:43 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:35.371 13:07:43 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:35.371 13:07:43 -- scripts/common.sh@336 -- # IFS=.-: 00:02:35.371 13:07:43 -- scripts/common.sh@336 -- # read -ra ver1 00:02:35.371 13:07:43 -- scripts/common.sh@337 -- # IFS=.-: 00:02:35.371 13:07:43 -- scripts/common.sh@337 -- # read -ra ver2 00:02:35.371 13:07:43 -- scripts/common.sh@338 -- # local 'op=<' 00:02:35.371 13:07:43 -- scripts/common.sh@340 -- # ver1_l=2 00:02:35.371 13:07:43 -- scripts/common.sh@341 -- # ver2_l=1 00:02:35.371 13:07:43 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:35.371 13:07:43 -- scripts/common.sh@344 -- # case "$op" in 00:02:35.371 13:07:43 -- scripts/common.sh@345 -- # : 1 00:02:35.371 13:07:43 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:35.371 13:07:43 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:35.371 13:07:43 -- scripts/common.sh@365 -- # decimal 1 00:02:35.371 13:07:43 -- scripts/common.sh@353 -- # local d=1 00:02:35.371 13:07:43 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:35.633 13:07:43 -- scripts/common.sh@355 -- # echo 1 00:02:35.633 13:07:43 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:35.633 13:07:43 -- scripts/common.sh@366 -- # decimal 2 00:02:35.633 13:07:43 -- scripts/common.sh@353 -- # local d=2 00:02:35.633 13:07:43 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:35.633 13:07:43 -- scripts/common.sh@355 -- # echo 2 00:02:35.633 13:07:43 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:35.633 13:07:43 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:35.633 13:07:43 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:35.633 13:07:43 -- scripts/common.sh@368 -- # return 0 00:02:35.633 13:07:43 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:35.633 13:07:43 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:35.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:35.633 --rc genhtml_branch_coverage=1 00:02:35.633 --rc genhtml_function_coverage=1 00:02:35.633 --rc genhtml_legend=1 00:02:35.633 --rc geninfo_all_blocks=1 00:02:35.633 --rc geninfo_unexecuted_blocks=1 00:02:35.633 00:02:35.633 ' 00:02:35.633 13:07:43 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:35.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:35.633 --rc genhtml_branch_coverage=1 00:02:35.633 --rc genhtml_function_coverage=1 00:02:35.633 --rc genhtml_legend=1 00:02:35.633 --rc geninfo_all_blocks=1 00:02:35.633 --rc geninfo_unexecuted_blocks=1 00:02:35.633 00:02:35.633 ' 00:02:35.633 13:07:43 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:35.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:35.633 --rc genhtml_branch_coverage=1 00:02:35.633 --rc genhtml_function_coverage=1 00:02:35.633 --rc genhtml_legend=1 00:02:35.633 --rc geninfo_all_blocks=1 00:02:35.633 --rc geninfo_unexecuted_blocks=1 00:02:35.633 00:02:35.633 ' 00:02:35.633 13:07:43 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:35.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:35.633 --rc genhtml_branch_coverage=1 00:02:35.633 --rc genhtml_function_coverage=1 00:02:35.633 --rc genhtml_legend=1 00:02:35.633 --rc geninfo_all_blocks=1 00:02:35.633 --rc geninfo_unexecuted_blocks=1 00:02:35.633 00:02:35.633 ' 00:02:35.633 13:07:43 -- spdk/autotest.sh@25 -- # source 
00:02:35.633 13:07:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:35.633 13:07:43 -- nvmf/common.sh@7 -- # uname -s
00:02:35.633 13:07:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:35.633 13:07:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:35.633 13:07:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:35.633 13:07:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:35.633 13:07:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:35.633 13:07:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:35.633 13:07:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:35.633 13:07:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:35.633 13:07:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:35.633 13:07:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:35.633 13:07:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:02:35.633 13:07:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:02:35.633 13:07:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:35.633 13:07:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:35.633 13:07:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:35.633 13:07:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:35.633 13:07:43 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:35.633 13:07:43 -- scripts/common.sh@15 -- # shopt -s extglob
00:02:35.633 13:07:43 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:35.633 13:07:43 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:35.633 13:07:43 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:35.633 13:07:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.633 13:07:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.633 13:07:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.633 13:07:43 -- paths/export.sh@5 -- # export PATH
00:02:35.633 13:07:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:35.633 13:07:43 -- nvmf/common.sh@51 -- # : 0
00:02:35.633 13:07:43 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:02:35.633 13:07:43 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:02:35.633 13:07:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:35.633 13:07:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:35.633 13:07:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:35.633 13:07:43 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:02:35.633 13:07:43 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:02:35.633 13:07:43 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:02:35.633 13:07:43 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:02:35.633 13:07:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:35.633 13:07:43 -- spdk/autotest.sh@32 -- # uname -s
00:02:35.633 13:07:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:35.633 13:07:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:35.633 13:07:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:35.633 13:07:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:35.633 13:07:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:35.633 13:07:43 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:35.633 13:07:43 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:35.633 13:07:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:35.633 13:07:43 -- spdk/autotest.sh@48 -- # udevadm_pid=3557124
00:02:35.633 13:07:43 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:35.633 13:07:43 -- pm/common@17 -- # local monitor
00:02:35.633 13:07:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:35.633 13:07:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.633 13:07:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.633 13:07:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.633 13:07:43 -- pm/common@21 -- # date +%s
00:02:35.633 13:07:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:35.633 13:07:43 -- pm/common@21 -- # date +%s
00:02:35.633 13:07:43 -- pm/common@25 -- # sleep 1
00:02:35.633 13:07:43 -- pm/common@21 -- # date +%s
00:02:35.633 13:07:43 -- pm/common@21 -- # date +%s
00:02:35.634 13:07:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730981263
00:02:35.634 13:07:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730981263
00:02:35.634 13:07:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730981263
00:02:35.634 13:07:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730981263
00:02:35.634 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730981263_collect-cpu-load.pm.log
00:02:35.634 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730981263_collect-vmstat.pm.log
00:02:35.634 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730981263_collect-cpu-temp.pm.log
00:02:35.634 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730981263_collect-bmc-pm.bmc.pm.log
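Before any tests run, autotest.sh swaps the kernel's core_pattern from the systemd-coredump pipe to SPDK's core-collector.sh, which is what the @33/@39 lines above record. A hedged sketch of that handoff (illustrative, not the verbatim script; $rootdir and $output_dir are assumed variables, and the restore trap is an assumption about cleanup):

    # Save the current handler, then pipe crashing processes' cores to a collector.
    old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # e.g. the systemd-coredump pipe seen above
    mkdir -p "$output_dir/coredumps"
    # A leading '|' makes the kernel spawn the helper and stream the core into it;
    # %P = global PID, %s = signal number, %t = dump time.
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # Put the original handler back however the run exits (needs root throughout).
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT

The four "Redirecting to ... pm.log" lines that follow are the cpu-load, vmstat, cpu-temp and BMC power monitors detaching into their own log files.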
00:02:36.573 13:07:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:36.573 13:07:44 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:36.573 13:07:44 -- common/autotest_common.sh@724 -- # xtrace_disable
00:02:36.573 13:07:44 -- common/autotest_common.sh@10 -- # set +x
00:02:36.573 13:07:44 -- spdk/autotest.sh@59 -- # create_test_list
00:02:36.573 13:07:44 -- common/autotest_common.sh@750 -- # xtrace_disable
00:02:36.573 13:07:44 -- common/autotest_common.sh@10 -- # set +x
00:02:36.573 13:07:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:36.573 13:07:44 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:36.573 13:07:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:36.573 13:07:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:36.573 13:07:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:36.573 13:07:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:36.573 13:07:44 -- common/autotest_common.sh@1455 -- # uname
00:02:36.573 13:07:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:02:36.573 13:07:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:36.573 13:07:44 -- common/autotest_common.sh@1475 -- # uname
00:02:36.573 13:07:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:02:36.573 13:07:44 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:02:36.573 13:07:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:02:36.832 lcov: LCOV version 1.15
00:02:36.832 13:07:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:51.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:51.731 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:06.772 13:08:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:06.772 13:08:13 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:06.772 13:08:13 -- common/autotest_common.sh@10 -- # set +x
00:03:06.772 13:08:13 -- spdk/autotest.sh@78 -- # rm -f
00:03:06.772 13:08:13 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:10.071 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:65:00.0 (144d a80a): Already using the nvme driver
00:03:10.071 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:03:10.071 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:03:10.331 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:03:10.592 13:08:18 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:10.592 13:08:18 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:10.592 13:08:18 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:10.592 13:08:18 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:03:10.592 13:08:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:10.592 13:08:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:10.592 13:08:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:10.592 13:08:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:10.592 13:08:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:03:10.592 13:08:18 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:10.592 13:08:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:10.592 13:08:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:10.592 13:08:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:10.592 13:08:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:10.592 13:08:18 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:10.592 No valid GPT data, bailing
00:03:10.592 13:08:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:10.592 13:08:18 -- scripts/common.sh@394 -- # pt=
00:03:10.592 13:08:18 -- scripts/common.sh@395 -- # return 1
00:03:10.592 13:08:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:10.592 1+0 records in
00:03:10.592 1+0 records out
00:03:10.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00152234 s, 689 MB/s
00:03:10.592 13:08:18 -- spdk/autotest.sh@105 -- # sync
00:03:10.592 13:08:18 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:10.592 13:08:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:10.592 13:08:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes
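block_in_use above decides whether the namespace may be scrubbed: spdk-gpt.py reported "No valid GPT data, bailing" and blkid found no partition-table type, so the drive is treated as free and its first MiB is zeroed. A compact sketch of the same gate (assumed equivalent behaviour; it uses blkid only, not SPDK's GPT reader, and the device name is taken from the log):

    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev")    # empty when no partition table is present
    if [[ -z $pt ]]; then
        # Nothing claims the disk: wipe the first MiB so stale GPT headers or
        # filesystem superblocks cannot bleed into the tests that follow.
        dd if=/dev/zero of="$dev" bs=1M count=1
        sync
    else
        echo "$dev carries a $pt partition table; leaving it alone" >&2
    fi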
00:03:18.730 13:08:26 -- spdk/autotest.sh@111 -- # uname -s
00:03:18.730 13:08:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:18.730 13:08:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:18.730 13:08:26 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:22.081 Hugepages
00:03:22.081 node hugesize free / total
00:03:22.081 node0 1048576kB 0 / 0
00:03:22.081 node0 2048kB 0 / 0
00:03:22.081 node1 1048576kB 0 / 0
00:03:22.081 node1 2048kB 0 / 0
00:03:22.081 
00:03:22.081 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:22.081 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:03:22.081 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:03:22.081 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:03:22.081 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:03:22.081 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:03:22.081 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:03:22.081 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:03:22.081 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:03:22.342 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:03:22.342 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:03:22.342 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:03:22.342 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:03:22.342 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:03:22.342 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:03:22.342 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:03:22.342 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:03:22.342 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
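setup.sh status renders the table above from sysfs. The per-NUMA-node hugepage counters it summarizes can be read directly; a small sketch using the standard kernel sysfs layout (not SPDK-specific):

    # Print free/total hugepages per node and page size, like the table above.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}          # e.g. 2048kB or 1048576kB
            total=$(< "$hp/nr_hugepages")
            free=$(< "$hp/free_hugepages")
            echo "${node##*/} $size $free / $total"
        done
    done

On this box both pools are still "0 / 0" because hugepages have not been reserved yet at this point in the run.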
00:03:22.342 13:08:30 -- spdk/autotest.sh@117 -- # uname -s
00:03:22.342 13:08:30 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:22.342 13:08:30 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:22.342 13:08:30 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:25.641 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:25.641 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:25.641 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:25.641 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:25.641 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:25.641 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:25.902 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:25.902 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:25.902 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:25.902 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:25.902 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:25.902 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:25.902 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:25.902 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:25.902 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:25.902 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:27.817 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:03:28.078 13:08:35 -- common/autotest_common.sh@1515 -- # sleep 1
00:03:29.019 13:08:36 -- common/autotest_common.sh@1516 -- # bdfs=()
00:03:29.019 13:08:36 -- common/autotest_common.sh@1516 -- # local bdfs
00:03:29.019 13:08:36 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:03:29.019 13:08:36 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:03:29.020 13:08:36 -- common/autotest_common.sh@1496 -- # bdfs=()
00:03:29.020 13:08:36 -- common/autotest_common.sh@1496 -- # local bdfs
00:03:29.020 13:08:36 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:29.020 13:08:36 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:29.020 13:08:36 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:03:29.020 13:08:36 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:03:29.020 13:08:36 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0
00:03:29.020 13:08:36 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:33.224 Waiting for block devices as requested
00:03:33.224 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:03:33.224 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:03:33.224 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:03:33.224 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:03:33.224 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:03:33.224 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:03:33.483 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:03:33.483 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:03:33.483 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:03:33.743 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:03:33.743 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:03:33.743 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:03:34.003 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:03:34.003 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:03:34.003 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:03:34.003 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:03:34.263 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:03:34.524 13:08:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:03:34.524 13:08:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0
00:03:34.524 13:08:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0
00:03:34.524 13:08:42 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme
00:03:34.524 13:08:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:03:34.524 13:08:42 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]]
00:03:34.524 13:08:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:03:34.524 13:08:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:03:34.524 13:08:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:03:34.524 13:08:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:03:34.524 13:08:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:03:34.524 13:08:42 -- common/autotest_common.sh@1529 -- # grep oacs
00:03:34.524 13:08:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:03:34.524 13:08:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f'
00:03:34.524 13:08:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:03:34.524 13:08:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:03:34.524 13:08:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:03:34.524 13:08:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:03:34.524 13:08:42 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:03:34.524 13:08:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:03:34.524 13:08:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:03:34.524 13:08:42 -- common/autotest_common.sh@1541 -- # continue
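nvme_namespace_revert only acts on controllers that support namespace management: bit 3 of the OACS field (0x5f here, and 0x5f & 0x8 = 8) plus an unallocated capacity of 0 short-circuit straight to continue, as the trace above shows. The same probe as a standalone sketch (requires nvme-cli; the grep/cut parsing mirrors the traced commands):

    ctrl=/dev/nvme0
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # e.g. ' 0x5f'
    ns_manage=$(( oacs & 0x8 ))                                   # bit 3 = namespace management
    unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)  # unallocated NVM capacity
    if (( ns_manage != 0 )) && (( unvmcap == 0 )); then
        echo "$ctrl: ns-manage supported, nothing unallocated; no revert needed"
    fi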
00:03:34.524 13:08:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:34.524 13:08:42 -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:34.524 13:08:42 -- common/autotest_common.sh@10 -- # set +x
00:03:34.524 13:08:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:34.524 13:08:42 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:34.524 13:08:42 -- common/autotest_common.sh@10 -- # set +x
00:03:34.524 13:08:42 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:38.725 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:38.725 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:03:38.986 13:08:46 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:38.986 13:08:46 -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:38.986 13:08:46 -- common/autotest_common.sh@10 -- # set +x
00:03:38.986 13:08:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:38.986 13:08:46 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:03:38.986 13:08:46 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:03:38.986 13:08:46 -- common/autotest_common.sh@1561 -- # bdfs=()
00:03:38.986 13:08:46 -- common/autotest_common.sh@1561 -- # _bdfs=()
00:03:38.986 13:08:46 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:03:38.986 13:08:46 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:03:38.986 13:08:46 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:03:38.986 13:08:46 -- common/autotest_common.sh@1496 -- # bdfs=()
00:03:38.986 13:08:46 -- common/autotest_common.sh@1496 -- # local bdfs
00:03:38.986 13:08:46 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:38.986 13:08:46 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:38.986 13:08:46 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:03:39.245 13:08:47 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:03:39.245 13:08:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0
00:03:39.245 13:08:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:03:39.245 13:08:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device
00:03:39.245 13:08:47 -- common/autotest_common.sh@1564 -- # device=0xa80a
00:03:39.245 13:08:47 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]]
00:03:39.245 13:08:47 -- common/autotest_common.sh@1570 -- # (( 0 > 0 ))
00:03:39.245 13:08:47 -- common/autotest_common.sh@1570 -- # return 0
00:03:39.245 13:08:47 -- common/autotest_common.sh@1577 -- # [[ -z '' ]]
00:03:39.245 13:08:47 -- common/autotest_common.sh@1578 -- # return 0
00:03:39.245 13:08:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:39.245 13:08:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:39.245 13:08:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:39.245 13:08:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:39.245 13:08:47 -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:39.245 13:08:47 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:39.245 13:08:47 -- common/autotest_common.sh@10 -- # set +x
00:03:39.245 13:08:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
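opal_revert_cleanup is keyed on PCI device ID 0x0a54 (an Intel data-center NVMe ID, to the best of my knowledge); the Samsung controller in this node reads back 0xa80a, so the revert is skipped, exactly as the @1565 pattern match above shows. A sketch of that gate (the sysfs read is standard; the target ID and BDF are taken from the trace):

    for bdf in 0000:65:00.0; do                          # normally the list from gen_nvme.sh
        device=$(< "/sys/bus/pci/devices/$bdf/device")   # e.g. 0xa80a
        if [[ $device == 0x0a54 ]]; then
            echo "$bdf matches; it would get an OPAL revert"
        else
            echo "$bdf ($device) skipped"
        fi
    done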
00:03:39.245 13:08:47 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:39.245 13:08:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:39.245 13:08:47 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:39.245 13:08:47 -- common/autotest_common.sh@10 -- # set +x
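run_test is autotest_common.sh's harness: it sanity-checks its arguments (the '[' 2 -le 1 ']' test above), prints the START/END banners that frame every suite in this log, and records the suite's timing. A rough, hedged reconstruction (the banner text matches the log; argument handling and timing are simplified):

    run_test() {
        local name=$1; shift
        (( $# >= 1 )) || return 1          # needs at least a command to run
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"                               # e.g. .../spdk/test/env/env.sh
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }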
00:03:39.245 ************************************
00:03:39.245 START TEST env
00:03:39.245 ************************************
00:03:39.245 13:08:47 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:03:39.245 * Looking for test storage...
00:03:39.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:03:39.245 13:08:47 env -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:03:39.245 13:08:47 env -- common/autotest_common.sh@1691 -- # lcov --version
00:03:39.245 13:08:47 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:03:39.245 13:08:47 env -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:03:39.245 13:08:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:39.245 13:08:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:39.245 13:08:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:39.245 13:08:47 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:39.245 13:08:47 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:39.245 13:08:47 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:39.245 13:08:47 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:39.245 13:08:47 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:39.245 13:08:47 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:39.245 13:08:47 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:39.245 13:08:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:39.245 13:08:47 env -- scripts/common.sh@344 -- # case "$op" in
00:03:39.245 13:08:47 env -- scripts/common.sh@345 -- # : 1
00:03:39.245 13:08:47 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:39.245 13:08:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:39.245 13:08:47 env -- scripts/common.sh@365 -- # decimal 1
00:03:39.245 13:08:47 env -- scripts/common.sh@353 -- # local d=1
00:03:39.245 13:08:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:39.245 13:08:47 env -- scripts/common.sh@355 -- # echo 1
00:03:39.245 13:08:47 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:39.245 13:08:47 env -- scripts/common.sh@366 -- # decimal 2
00:03:39.245 13:08:47 env -- scripts/common.sh@353 -- # local d=2
00:03:39.245 13:08:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:39.245 13:08:47 env -- scripts/common.sh@355 -- # echo 2
00:03:39.245 13:08:47 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:39.245 13:08:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:39.245 13:08:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:39.245 13:08:47 env -- scripts/common.sh@368 -- # return 0
00:03:39.245 13:08:47 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:39.245 13:08:47 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:03:39.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:39.245 --rc genhtml_branch_coverage=1
00:03:39.245 --rc genhtml_function_coverage=1
00:03:39.245 --rc genhtml_legend=1
00:03:39.245 --rc geninfo_all_blocks=1
00:03:39.245 --rc geninfo_unexecuted_blocks=1
00:03:39.245 
00:03:39.245 '
00:03:39.245 13:08:47 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:03:39.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:39.245 --rc genhtml_branch_coverage=1
00:03:39.245 --rc genhtml_function_coverage=1
00:03:39.245 --rc genhtml_legend=1
00:03:39.245 --rc geninfo_all_blocks=1
00:03:39.245 --rc geninfo_unexecuted_blocks=1
00:03:39.245 
00:03:39.245 '
00:03:39.245 13:08:47 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:03:39.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:39.245 --rc genhtml_branch_coverage=1
00:03:39.245 --rc genhtml_function_coverage=1
00:03:39.245 --rc genhtml_legend=1
00:03:39.245 --rc geninfo_all_blocks=1
00:03:39.245 --rc geninfo_unexecuted_blocks=1
00:03:39.245 
00:03:39.245 '
00:03:39.245 13:08:47 env -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:03:39.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:39.245 --rc genhtml_branch_coverage=1
00:03:39.245 --rc genhtml_function_coverage=1
00:03:39.245 --rc genhtml_legend=1
00:03:39.245 --rc geninfo_all_blocks=1
00:03:39.245 --rc geninfo_unexecuted_blocks=1
00:03:39.245 
00:03:39.245 '
00:03:39.245 13:08:47 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:39.245 13:08:47 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:39.245 13:08:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:39.245 13:08:47 env -- common/autotest_common.sh@10 -- # set +x
00:03:39.506 ************************************
00:03:39.506 START TEST env_memory
00:03:39.506 ************************************
00:03:39.506 13:08:47 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:03:39.506 
00:03:39.506 
00:03:39.506 CUnit - A unit testing framework for C - Version 2.1-3
00:03:39.506 http://cunit.sourceforge.net/
00:03:39.506 
00:03:39.506 
00:03:39.506 Suite: memory
00:03:39.506 Test: alloc and free memory map ...[2024-11-07 13:08:47.317470] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:39.506 passed
00:03:39.506 Test: mem map translation ...[2024-11-07 13:08:47.359309] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:39.506 [2024-11-07 13:08:47.359363] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:39.506 [2024-11-07 13:08:47.359433] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:39.506 [2024-11-07 13:08:47.359452] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:39.506 passed
00:03:39.506 Test: mem map registration ...[2024-11-07 13:08:47.433284] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:03:39.506 [2024-11-07 13:08:47.433319] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:03:39.506 passed
00:03:39.768 Test: mem map adjacent registrations ...passed
00:03:39.768 
00:03:39.768 Run Summary: Type Total Ran Passed Failed Inactive
00:03:39.768 suites 1 1 n/a 0 0
00:03:39.768 tests 4 4 4 0 0
00:03:39.768 asserts 152 152 152 0 n/a
00:03:39.768 
00:03:39.768 Elapsed time = 0.259 seconds
00:03:39.768 
00:03:39.768 real 0m0.296s
00:03:39.768 user 0m0.267s
00:03:39.768 sys 0m0.029s
00:03:39.768 13:08:47 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:39.768 13:08:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:03:39.768 ************************************
00:03:39.768 END TEST env_memory
00:03:39.768 ************************************
00:03:39.768 13:08:47 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:39.768 13:08:47 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:39.768 13:08:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:39.768 13:08:47 env -- common/autotest_common.sh@10 -- # set +x
00:03:39.768 ************************************
00:03:39.768 START TEST env_vtophys
00:03:39.768 ************************************
00:03:39.768 13:08:47 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:39.768 EAL: lib.eal log level changed from notice to debug
00:03:39.768 EAL: Detected lcore 0 as core 0 on socket 0
00:03:39.768 EAL: Detected lcore 1 as core 1 on socket 0
00:03:39.768 EAL: Detected lcore 2 as core 2 on socket 0
00:03:39.768 EAL: Detected lcore 3 as core 3 on socket 0
00:03:39.768 EAL: Detected lcore 4 as core 4 on socket 0
00:03:39.768 EAL: Detected lcore 5 as core 5 on socket 0
00:03:39.768 EAL: Detected lcore 6 as core 6 on socket 0
00:03:39.768 EAL: Detected lcore 7 as core 7 on socket 0
00:03:39.768 EAL: Detected lcore 8 as core 8 on socket 0
00:03:39.768 EAL: Detected lcore 9 as core 9 on socket 0
00:03:39.768 EAL: Detected lcore 10 as core 10 on socket 0
00:03:39.768 EAL: Detected lcore 11 as core 11 on socket 0
00:03:39.768 EAL: Detected lcore 12 as core 12 on socket 0
00:03:39.768 EAL: Detected lcore 13 as core 13 on socket 0
00:03:39.768 EAL: Detected lcore 14 as core 14 on socket 0
00:03:39.768 EAL: Detected lcore 15 as core 15 on socket 0
00:03:39.768 EAL: Detected lcore 16 as core 16 on socket 0
00:03:39.768 EAL: Detected lcore 17 as core 17 on socket 0
00:03:39.768 EAL: Detected lcore 18 as core 18 on socket 0
00:03:39.768 EAL: Detected lcore 19 as core 19 on socket 0
00:03:39.768 EAL: Detected lcore 20 as core 20 on socket 0
00:03:39.768 EAL: Detected lcore 21 as core 21 on socket 0
00:03:39.768 EAL: Detected lcore 22 as core 22 on socket 0
00:03:39.768 EAL: Detected lcore 23 as core 23 on socket 0
00:03:39.768 EAL: Detected lcore 24 as core 24 on socket 0
00:03:39.768 EAL: Detected lcore 25 as core 25 on socket 0
00:03:39.768 EAL: Detected lcore 26 as core 26 on socket 0
00:03:39.768 EAL: Detected lcore 27 as core 27 on socket 0
00:03:39.768 EAL: Detected lcore 28 as core 28 on socket 0
00:03:39.768 EAL: Detected lcore 29 as core 29 on socket 0
00:03:39.768 EAL: Detected lcore 30 as core 30 on socket 0
00:03:39.768 EAL: Detected lcore 31 as core 31 on socket 0
00:03:39.768 EAL: Detected lcore 32 as core 32 on socket 0
00:03:39.768 EAL: Detected lcore 33 as core 33 on socket 0
00:03:39.768 EAL: Detected lcore 34 as core 34 on socket 0
00:03:39.768 EAL: Detected lcore 35 as core 35 on socket 0
00:03:39.768 EAL: Detected lcore 36 as core 0 on socket 1
00:03:39.768 EAL: Detected lcore 37 as core 1 on socket 1
00:03:39.768 EAL: Detected lcore 38 as core 2 on socket 1
00:03:39.768 EAL: Detected lcore 39 as core 3 on socket 1
00:03:39.768 EAL: Detected lcore 40 as core 4 on socket 1
00:03:39.769 EAL: Detected lcore 41 as core 5 on socket 1
00:03:39.769 EAL: Detected lcore 42 as core 6 on socket 1
00:03:39.769 EAL: Detected lcore 43 as core 7 on socket 1
00:03:39.769 EAL: Detected lcore 44 as core 8 on socket 1
00:03:39.769 EAL: Detected lcore 45 as core 9 on socket 1
00:03:39.769 EAL: Detected lcore 46 as core 10 on socket 1
00:03:39.769 EAL: Detected lcore 47 as core 11 on socket 1
00:03:39.769 EAL: Detected lcore 48 as core 12 on socket 1
00:03:39.769 EAL: Detected lcore 49 as core 13 on socket 1
00:03:39.769 EAL: Detected lcore 50 as core 14 on socket 1
00:03:39.769 EAL: Detected lcore 51 as core 15 on socket 1
00:03:39.769 EAL: Detected lcore 52 as core 16 on socket 1
00:03:39.769 EAL: Detected lcore 53 as core 17 on socket 1
00:03:39.769 EAL: Detected lcore 54 as core 18 on socket 1
00:03:39.769 EAL: Detected lcore 55 as core 19 on socket 1
00:03:39.769 EAL: Detected lcore 56 as core 20 on socket 1
00:03:39.769 EAL: Detected lcore 57 as core 21 on socket 1
00:03:39.769 EAL: Detected lcore 58 as core 22 on socket 1
00:03:39.769 EAL: Detected lcore 59 as core 23 on socket 1
00:03:39.769 EAL: Detected lcore 60 as core 24 on socket 1
00:03:39.769 EAL: Detected lcore 61 as core 25 on socket 1
00:03:39.769 EAL: Detected lcore 62 as core 26 on socket 1
00:03:39.769 EAL: Detected lcore 63 as core 27 on socket 1
00:03:39.769 EAL: Detected lcore 64 as core 28 on socket 1
00:03:39.769 EAL: Detected lcore 65 as core 29 on socket 1
00:03:39.769 EAL: Detected lcore 66 as core 30 on socket 1
00:03:39.769 EAL: Detected lcore 67 as core 31 on socket 1
00:03:39.769 EAL: Detected lcore 68 as core 32 on socket 1
00:03:39.769 EAL: Detected lcore 69 as core 33 on socket 1
00:03:39.769 EAL: Detected lcore 70 as core 34 on socket 1
00:03:39.769 EAL: Detected lcore 71 as core 35 on socket 1
00:03:39.769 EAL: Detected lcore 72 as core 0 on socket 0
00:03:39.769 EAL: Detected lcore 73 as core 1 on socket 0
00:03:39.769 EAL: Detected lcore 74 as core 2 on socket 0
00:03:39.769 EAL: Detected lcore 75 as core 3 on socket 0
00:03:39.769 EAL: Detected lcore 76 as core 4 on socket 0
00:03:39.769 EAL: Detected lcore 77 as core 5 on socket 0
00:03:39.769 EAL: Detected lcore 78 as core 6 on socket 0
00:03:39.769 EAL: Detected lcore 79 as core 7 on socket 0
00:03:39.769 EAL: Detected lcore 80 as core 8 on socket 0
00:03:39.769 EAL: Detected lcore 81 as core 9 on socket 0
00:03:39.769 EAL: Detected lcore 82 as core 10 on socket 0
00:03:39.769 EAL: Detected lcore 83 as core 11 on socket 0
00:03:39.769 EAL: Detected lcore 84 as core 12 on socket 0
00:03:39.769 EAL: Detected lcore 85 as core 13 on socket 0
00:03:39.769 EAL: Detected lcore 86 as core 14 on socket 0
00:03:39.769 EAL: Detected lcore 87 as core 15 on socket 0
00:03:39.769 EAL: Detected lcore 88 as core 16 on socket 0
00:03:39.769 EAL: Detected lcore 89 as core 17 on socket 0
00:03:39.769 EAL: Detected lcore 90 as core 18 on socket 0
00:03:39.769 EAL: Detected lcore 91 as core 19 on socket 0
00:03:39.769 EAL: Detected lcore 92 as core 20 on socket 0
00:03:39.769 EAL: Detected lcore 93 as core 21 on socket 0
00:03:39.769 EAL: Detected lcore 94 as core 22 on socket 0
00:03:39.769 EAL: Detected lcore 95 as core 23 on socket 0
00:03:39.769 EAL: Detected lcore 96 as core 24 on socket 0
00:03:39.769 EAL: Detected lcore 97 as core 25 on socket 0
00:03:39.769 EAL: Detected lcore 98 as core 26 on socket 0
00:03:39.769 EAL: Detected lcore 99 as core 27 on socket 0
00:03:39.769 EAL: Detected lcore 100 as core 28 on socket 0
00:03:39.769 EAL: Detected lcore 101 as core 29 on socket 0
00:03:39.769 EAL: Detected lcore 102 as core 30 on socket 0
00:03:39.769 EAL: Detected lcore 103 as core 31 on socket 0
00:03:39.769 EAL: Detected lcore 104 as core 32 on socket 0
00:03:39.769 EAL: Detected lcore 105 as core 33 on socket 0
00:03:39.769 EAL: Detected lcore 106 as core 34 on socket 0
00:03:39.769 EAL: Detected lcore 107 as core 35 on socket 0
00:03:39.769 EAL: Detected lcore 108 as core 0 on socket 1
00:03:39.769 EAL: Detected lcore 109 as core 1 on socket 1
00:03:39.769 EAL: Detected lcore 110 as core 2 on socket 1
00:03:39.769 EAL: Detected lcore 111 as core 3 on socket 1
00:03:39.769 EAL: Detected lcore 112 as core 4 on socket 1
00:03:39.769 EAL: Detected lcore 113 as core 5 on socket 1
00:03:39.769 EAL: Detected lcore 114 as core 6 on socket 1
00:03:39.769 EAL: Detected lcore 115 as core 7 on socket 1
00:03:39.769 EAL: Detected lcore 116 as core 8 on socket 1
00:03:39.769 EAL: Detected lcore 117 as core 9 on socket 1
00:03:39.769 EAL: Detected lcore 118 as core 10 on socket 1
00:03:39.769 EAL: Detected lcore 119 as core 11 on socket 1
00:03:39.769 EAL: Detected lcore 120 as core 12 on socket 1
00:03:39.769 EAL: Detected lcore 121 as core 13 on socket 1
00:03:39.769 EAL: Detected lcore 122 as core 14 on socket 1
00:03:39.769 EAL: Detected lcore 123 as core 15 on socket 1
00:03:39.769 EAL: Detected lcore 124 as core 16 on socket 1
00:03:39.769 EAL: Detected lcore 125 as core 17 on socket 1
00:03:39.769 EAL: Detected lcore 126 as core 18 on socket 1
00:03:39.769 EAL: Detected lcore 127 as core 19 on socket 1
00:03:39.769 EAL: Skipped lcore 128 as core 20 on socket 1
00:03:39.769 EAL: Skipped lcore 129 as core 21 on socket 1
00:03:39.769 EAL: Skipped lcore 130 as core 22 on socket 1
00:03:39.769 EAL: Skipped lcore 131 as core 23 on socket 1
00:03:39.769 EAL: Skipped lcore 132 as core 24 on socket 1
00:03:39.769 EAL: Skipped lcore 133 as core 25 on socket 1
00:03:39.769 EAL: Skipped lcore 134 as core 26 on socket 1
00:03:39.769 EAL: Skipped lcore 135 as core 27 on socket 1
00:03:39.769 EAL: Skipped lcore 136 as core 28 on socket 1
00:03:39.769 EAL: Skipped lcore 137 as core 29 on socket 1
00:03:39.769 EAL: Skipped lcore 138 as core 30 on socket 1
00:03:39.769 EAL: Skipped lcore 139 as core 31 on socket 1
00:03:39.769 EAL: Skipped lcore 140 as core 32 on socket 1
00:03:39.769 EAL: Skipped lcore 141 as core 33 on socket 1
00:03:39.769 EAL: Skipped lcore 142 as core 34 on socket 1
00:03:39.769 EAL: Skipped lcore 143 as core 35 on socket 1
00:03:39.769 EAL: Maximum logical cores by configuration: 128
00:03:39.769 EAL: Detected CPU lcores: 128
00:03:39.769 EAL: Detected NUMA nodes: 2
00:03:39.769 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:39.769 EAL: Detected shared linkage of DPDK
00:03:39.769 EAL: No shared files mode enabled, IPC will be disabled
00:03:39.769 EAL: Bus pci wants IOVA as 'DC'
00:03:39.769 EAL: Buses did not request a specific IOVA mode.
00:03:39.769 EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:39.769 EAL: Selected IOVA mode 'VA'
00:03:39.769 EAL: Probing VFIO support...
00:03:39.769 EAL: IOMMU type 1 (Type 1) is supported
00:03:39.769 EAL: IOMMU type 7 (sPAPR) is not supported
00:03:39.769 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:39.769 EAL: VFIO support initialized
00:03:39.769 EAL: Ask a virtual area of 0x2e000 bytes
00:03:39.769 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:39.769 EAL: Setting up physically contiguous memory...
00:03:39.769 EAL: Setting maximum number of open files to 524288
00:03:39.769 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:39.769 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:39.769 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:39.769 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.769 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:39.769 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:39.769 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.769 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:39.769 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:39.769 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.769 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:39.769 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:39.769 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.769 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:39.769 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:39.769 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.769 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:39.769 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:39.769 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.769 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:39.769 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:39.770 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.770 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:39.770 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:39.770 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.770 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:39.770 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:39.770 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:39.770 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.770 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:39.770 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:39.770 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.770 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:39.770 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:39.770 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.770 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:39.770 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:39.770 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.770 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:39.770 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:39.770 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.770 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:39.770 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:39.770 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.770 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:39.770 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:39.770 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.770 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:39.770 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:39.770 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.770 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:39.770 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:39.770 EAL: Hugepages will be freed exactly as allocated.
00:03:39.770 EAL: No shared files mode enabled, IPC is disabled
00:03:39.770 EAL: No shared files mode enabled, IPC is disabled
00:03:39.770 EAL: TSC frequency is ~2400000 KHz
00:03:39.770 EAL: Main lcore 0 is ready (tid=7f388c87aa40;cpuset=[0])
00:03:39.770 EAL: Trying to obtain current memory policy.
00:03:39.770 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:39.770 EAL: Restoring previous memory policy: 0
00:03:39.770 EAL: request: mp_malloc_sync
00:03:39.770 EAL: No shared files mode enabled, IPC is disabled
00:03:39.770 EAL: Heap on socket 0 was expanded by 2MB
00:03:39.770 EAL: No shared files mode enabled, IPC is disabled
00:03:40.030 EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:03:40.030 EAL: Mem event callback 'spdk:(nil)' registered
00:03:40.030 
00:03:40.030 
00:03:40.030 CUnit - A unit testing framework for C - Version 2.1-3
00:03:40.030 http://cunit.sourceforge.net/
00:03:40.030 
00:03:40.030 
00:03:40.030 Suite: components_suite
00:03:40.289 Test: vtophys_malloc_test ...passed
00:03:40.289 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:40.289 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.289 EAL: Restoring previous memory policy: 4
00:03:40.289 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.289 EAL: request: mp_malloc_sync
00:03:40.289 EAL: No shared files mode enabled, IPC is disabled
00:03:40.289 EAL: Heap on socket 0 was expanded by 4MB
00:03:40.289 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.289 EAL: request: mp_malloc_sync
00:03:40.289 EAL: No shared files mode enabled, IPC is disabled
00:03:40.289 EAL: Heap on socket 0 was shrunk by 4MB
00:03:40.289 EAL: Trying to obtain current memory policy.
00:03:40.289 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.289 EAL: Restoring previous memory policy: 4
00:03:40.289 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.289 EAL: request: mp_malloc_sync
00:03:40.289 EAL: No shared files mode enabled, IPC is disabled
00:03:40.289 EAL: Heap on socket 0 was expanded by 6MB
00:03:40.289 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.289 EAL: request: mp_malloc_sync
00:03:40.289 EAL: No shared files mode enabled, IPC is disabled
00:03:40.289 EAL: Heap on socket 0 was shrunk by 6MB
00:03:40.289 EAL: Trying to obtain current memory policy.
00:03:40.289 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.289 EAL: Restoring previous memory policy: 4
00:03:40.289 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.289 EAL: request: mp_malloc_sync
00:03:40.289 EAL: No shared files mode enabled, IPC is disabled
00:03:40.289 EAL: Heap on socket 0 was expanded by 10MB
00:03:40.289 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.289 EAL: request: mp_malloc_sync
00:03:40.289 EAL: No shared files mode enabled, IPC is disabled
00:03:40.289 EAL: Heap on socket 0 was shrunk by 10MB
00:03:40.289 EAL: Trying to obtain current memory policy.
00:03:40.289 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.289 EAL: Restoring previous memory policy: 4
00:03:40.289 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.289 EAL: request: mp_malloc_sync
00:03:40.289 EAL: No shared files mode enabled, IPC is disabled
00:03:40.289 EAL: Heap on socket 0 was expanded by 18MB
00:03:40.289 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.289 EAL: request: mp_malloc_sync
00:03:40.289 EAL: No shared files mode enabled, IPC is disabled
00:03:40.289 EAL: Heap on socket 0 was shrunk by 18MB
00:03:40.289 EAL: Trying to obtain current memory policy.
00:03:40.289 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.289 EAL: Restoring previous memory policy: 4
00:03:40.289 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.289 EAL: request: mp_malloc_sync
00:03:40.289 EAL: No shared files mode enabled, IPC is disabled
00:03:40.289 EAL: Heap on socket 0 was expanded by 34MB
00:03:40.290 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.290 EAL: request: mp_malloc_sync
00:03:40.290 EAL: No shared files mode enabled, IPC is disabled
00:03:40.290 EAL: Heap on socket 0 was shrunk by 34MB
00:03:40.290 EAL: Trying to obtain current memory policy.
00:03:40.290 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.290 EAL: Restoring previous memory policy: 4
00:03:40.290 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.290 EAL: request: mp_malloc_sync
00:03:40.290 EAL: No shared files mode enabled, IPC is disabled
00:03:40.290 EAL: Heap on socket 0 was expanded by 66MB
00:03:40.549 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.549 EAL: request: mp_malloc_sync
00:03:40.549 EAL: No shared files mode enabled, IPC is disabled
00:03:40.549 EAL: Heap on socket 0 was shrunk by 66MB
00:03:40.550 EAL: Trying to obtain current memory policy.
00:03:40.550 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.550 EAL: Restoring previous memory policy: 4
00:03:40.550 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.550 EAL: request: mp_malloc_sync
00:03:40.550 EAL: No shared files mode enabled, IPC is disabled
00:03:40.550 EAL: Heap on socket 0 was expanded by 130MB
00:03:40.810 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.810 EAL: request: mp_malloc_sync
00:03:40.810 EAL: No shared files mode enabled, IPC is disabled
00:03:40.810 EAL: Heap on socket 0 was shrunk by 130MB
00:03:40.810 EAL: Trying to obtain current memory policy.
00:03:40.810 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.810 EAL: Restoring previous memory policy: 4
00:03:40.810 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.810 EAL: request: mp_malloc_sync
00:03:40.810 EAL: No shared files mode enabled, IPC is disabled
00:03:40.810 EAL: Heap on socket 0 was expanded by 258MB
00:03:41.379 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.379 EAL: request: mp_malloc_sync
00:03:41.379 EAL: No shared files mode enabled, IPC is disabled
00:03:41.379 EAL: Heap on socket 0 was shrunk by 258MB
00:03:41.639 EAL: Trying to obtain current memory policy.
00:03:41.639 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:41.639 EAL: Restoring previous memory policy: 4
00:03:41.639 EAL: Calling mem event callback 'spdk:(nil)'
00:03:41.639 EAL: request: mp_malloc_sync
00:03:41.639 EAL: No shared files mode enabled, IPC is disabled
00:03:41.639 EAL: Heap on socket 0 was expanded by 514MB
00:03:42.223 EAL: Calling mem event callback 'spdk:(nil)'
00:03:42.223 EAL: request: mp_malloc_sync
00:03:42.223 EAL: No shared files mode enabled, IPC is disabled
00:03:42.223 EAL: Heap on socket 0 was shrunk by 514MB
00:03:42.794 EAL: Trying to obtain current memory policy.
00:03:42.794 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:43.054 EAL: Restoring previous memory policy: 4
00:03:43.054 EAL: Calling mem event callback 'spdk:(nil)'
00:03:43.054 EAL: request: mp_malloc_sync
00:03:43.054 EAL: No shared files mode enabled, IPC is disabled
00:03:43.054 EAL: Heap on socket 0 was expanded by 1026MB
00:03:44.438 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.438 EAL: request: mp_malloc_sync
00:03:44.438 EAL: No shared files mode enabled, IPC is disabled
00:03:44.438 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:45.379 passed
00:03:45.379 
00:03:45.379 Run Summary: Type Total Ran Passed Failed Inactive
00:03:45.379 suites 1 1 n/a 0 0
00:03:45.379 tests 2 2 2 0 0
00:03:45.379 asserts 497 497 497 0 n/a
00:03:45.379 
00:03:45.379 Elapsed time = 5.404 seconds
00:03:45.379 EAL: Calling mem event callback 'spdk:(nil)'
00:03:45.379 EAL: request: mp_malloc_sync
00:03:45.379 EAL: No shared files mode enabled, IPC is disabled
00:03:45.379 EAL: Heap on socket 0 was shrunk by 2MB
00:03:45.379 EAL: No shared files mode enabled, IPC is disabled
00:03:45.379 EAL: No shared files mode enabled, IPC is disabled
00:03:45.379 EAL: No shared files mode enabled, IPC is disabled
00:03:45.379 
00:03:45.379 
00:03:45.379 real 0m5.680s
00:03:45.379 user 0m4.909s
00:03:45.379 sys 0m0.722s
00:03:45.380 13:08:53 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:45.380 13:08:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:45.380 ************************************
00:03:45.380 END TEST env_vtophys
00:03:45.380 ************************************
00:03:45.380 13:08:53 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:45.380 13:08:53 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:45.380 13:08:53 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:45.380 13:08:53 env -- common/autotest_common.sh@10 -- # set +x
00:03:45.380 ************************************
00:03:45.380 START TEST env_pci
00:03:45.380 ************************************
00:03:45.380 13:08:53 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:45.380 
00:03:45.380 
00:03:45.380 CUnit - A unit testing framework for C - Version 2.1-3
00:03:45.380 http://cunit.sourceforge.net/
00:03:45.380 
00:03:45.380 
00:03:45.380 Suite: pci
00:03:45.648 Test: pci_hook ...[2024-11-07 13:08:53.386378] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3578318 has claimed it
00:03:45.648 EAL: Cannot find device (10000:00:01.0)
00:03:45.648 EAL: Failed to attach device on primary process
00:03:45.648 passed
00:03:45.648 
00:03:45.648 Run Summary: Type Total Ran Passed Failed Inactive
00:03:45.648 suites 1 1 n/a 0 0
00:03:45.648 tests 1 1 1 0 0
00:03:45.648 asserts 25 25 25 0 n/a
00:03:45.648 
00:03:45.648 Elapsed time = 0.035 seconds
00:03:45.648 
00:03:45.648 real 0m0.093s
00:03:45.648 user 0m0.037s
00:03:45.648 sys 0m0.056s
00:03:45.648 13:08:53 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:45.648 13:08:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:45.648 ************************************
00:03:45.648 END TEST env_pci
00:03:45.648 ************************************
00:03:45.648 13:08:53 env -- env/env.sh@14 -- # argv='-c 0x1 '
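The pci_hook failure above is the expected result: the test deliberately claims a fake BDF twice, and spdk_pci_device_claim enforces single-process ownership through a lock file under /var/tmp. An approximate user-space analogue with flock (illustrative only; SPDK implements this in C with an fcntl lock, and the message text is modeled on the log, not copied from any tool):

    bdf=10000:00:01.0                          # fake bus address, as in the test
    lock=/var/tmp/spdk_pci_lock_$bdf
    exec 9>"$lock"                             # create/open the per-device lock file on fd 9
    if ! flock -n 9; then
        echo "Cannot create lock on device $lock, another process has claimed it" >&2
        exit 1
    fi
    echo "claimed $bdf; the lock is held for the life of fd 9"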
00:03:45.648 13:08:53 env -- env/env.sh@15 -- # uname
00:03:45.648 13:08:53 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:45.648 13:08:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:45.648 13:08:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:45.648 13:08:53 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:03:45.648 13:08:53 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:45.648 13:08:53 env -- common/autotest_common.sh@10 -- # set +x
00:03:45.648 ************************************
00:03:45.648 START TEST env_dpdk_post_init
00:03:45.648 ************************************
00:03:45.648 13:08:53 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
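env.sh assembles the DPDK arguments for env_dpdk_post_init exactly as the @14/@22 lines trace it: a one-core mask, plus --base-virtaddr pinned at 0x200000000000 on Linux so the memory tests get a deterministic virtual address layout. A condensed sketch of that assembly (reconstructed from the trace; $testdir is an assumed variable, and the real script does more):

    argv='-c 0x1 '                                   # single-core mask for the helper binary
    if [ "$(uname)" = Linux ]; then
        # A fixed base address keeps hugepage mappings reproducible across runs.
        argv+=--base-virtaddr=0x200000000000
    fi
    # $argv is intentionally unquoted so it splits into separate arguments.
    "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv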
00:03:45.648 EAL: Detected CPU lcores: 128
00:03:45.648 EAL: Detected NUMA nodes: 2
00:03:45.648 EAL: Detected shared linkage of DPDK
00:03:45.648 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:45.911 EAL: Selected IOVA mode 'VA'
00:03:45.911 EAL: VFIO support initialized
00:03:45.911 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:45.911 EAL: Using IOMMU type 1 (Type 1)
00:03:46.172 EAL: Ignore mapping IO port bar(1)
00:03:46.172 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:03:46.433 EAL: Ignore mapping IO port bar(1)
00:03:46.433 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:03:46.693 EAL: Ignore mapping IO port bar(1)
00:03:46.693 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:03:46.954 EAL: Ignore mapping IO port bar(1)
00:03:46.954 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:03:47.214 EAL: Ignore mapping IO port bar(1)
00:03:47.214 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:03:47.475 EAL: Ignore mapping IO port bar(1)
00:03:47.475 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:03:47.736 EAL: Ignore mapping IO port bar(1)
00:03:47.736 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:03:47.996 EAL: Ignore mapping IO port bar(1)
00:03:47.996 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:03:47.996 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:03:48.256 EAL: Ignore mapping IO port bar(1)
00:03:48.256 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:03:48.518 EAL: Ignore mapping IO port bar(1)
00:03:48.518 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:03:48.518 EAL: Ignore mapping IO port bar(1)
00:03:48.518 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:03:48.779 EAL: Ignore mapping IO port bar(1)
00:03:48.779 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:03:49.040 EAL: Ignore mapping IO port bar(1)
00:03:49.040 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:03:49.300 EAL: Ignore mapping IO port bar(1)
00:03:49.300 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:03:49.300 EAL: Ignore mapping IO port bar(1)
00:03:49.561 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:03:49.561 EAL: Ignore mapping IO port bar(1)
00:03:49.822 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:03:49.822 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:03:49.822 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:03:49.822 Starting DPDK initialization...
00:03:49.822 Starting SPDK post initialization...
00:03:49.822 SPDK NVMe probe
00:03:49.822 Attaching to 0000:65:00.0
00:03:49.822 Attached to 0000:65:00.0
00:03:49.822 Cleaning up...
00:03:51.734 
00:03:51.734 real 0m5.879s
00:03:51.734 user 0m0.161s
00:03:51.734 sys 0m0.272s
00:03:51.734 13:08:59 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:51.734 13:08:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:51.734 ************************************
00:03:51.734 END TEST env_dpdk_post_init
00:03:51.734 ************************************
00:03:51.734 13:08:59 env -- env/env.sh@26 -- # uname
00:03:51.734 13:08:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:51.734 13:08:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:51.734 13:08:59 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:51.734 13:08:59 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:51.734 13:08:59 env -- common/autotest_common.sh@10 -- # set +x
00:03:51.734 ************************************
00:03:51.734 START TEST env_mem_callbacks
00:03:51.734 ************************************
00:03:51.734 13:08:59 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:51.734 EAL: Detected CPU lcores: 128
00:03:51.734 EAL: Detected NUMA nodes: 2
00:03:51.734 EAL: Detected shared linkage of DPDK
00:03:51.734 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:51.734 EAL: Selected IOVA mode 'VA'
00:03:51.734 EAL: VFIO support initialized
00:03:51.734 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:51.734 
00:03:51.734 
00:03:51.734 CUnit - A unit testing framework for C - Version 2.1-3
00:03:51.734 http://cunit.sourceforge.net/
00:03:51.734 
00:03:51.734 
00:03:51.734 Suite: memory
00:03:51.734 Test: test ...
00:03:51.734 register 0x200000200000 2097152 00:03:51.734 malloc 3145728 00:03:51.734 register 0x200000400000 4194304 00:03:51.734 buf 0x2000004fffc0 len 3145728 PASSED 00:03:51.734 malloc 64 00:03:51.734 buf 0x2000004ffec0 len 64 PASSED 00:03:51.734 malloc 4194304 00:03:51.734 register 0x200000800000 6291456 00:03:51.734 buf 0x2000009fffc0 len 4194304 PASSED 00:03:51.734 free 0x2000004fffc0 3145728 00:03:51.734 free 0x2000004ffec0 64 00:03:51.734 unregister 0x200000400000 4194304 PASSED 00:03:51.734 free 0x2000009fffc0 4194304 00:03:51.734 unregister 0x200000800000 6291456 PASSED 00:03:51.734 malloc 8388608 00:03:51.734 register 0x200000400000 10485760 00:03:51.734 buf 0x2000005fffc0 len 8388608 PASSED 00:03:51.734 free 0x2000005fffc0 8388608 00:03:51.734 unregister 0x200000400000 10485760 PASSED 00:03:51.734 passed 00:03:51.734 00:03:51.734 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.734 suites 1 1 n/a 0 0 00:03:51.734 tests 1 1 1 0 0 00:03:51.734 asserts 15 15 15 0 n/a 00:03:51.734 00:03:51.734 Elapsed time = 0.047 seconds 00:03:51.734 00:03:51.734 real 0m0.183s 00:03:51.734 user 0m0.080s 00:03:51.734 sys 0m0.102s 00:03:51.734 13:08:59 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:51.734 13:08:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:51.734 ************************************ 00:03:51.734 END TEST env_mem_callbacks 00:03:51.734 ************************************ 00:03:51.996 00:03:51.996 real 0m12.678s 00:03:51.996 user 0m5.675s 00:03:51.996 sys 0m1.540s 00:03:51.996 13:08:59 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:51.996 13:08:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.996 ************************************ 00:03:51.996 END TEST env 00:03:51.996 ************************************ 00:03:51.996 13:08:59 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:51.996 13:08:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:51.996 13:08:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:51.996 13:08:59 -- common/autotest_common.sh@10 -- # set +x 00:03:51.996 ************************************ 00:03:51.996 START TEST rpc 00:03:51.996 ************************************ 00:03:51.996 13:08:59 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:51.996 * Looking for test storage... 
00:03:51.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.996 13:08:59 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:51.996 13:08:59 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:51.996 13:08:59 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:52.257 13:09:00 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:52.257 13:09:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.257 13:09:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.257 13:09:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.257 13:09:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.257 13:09:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.257 13:09:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.257 13:09:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.257 13:09:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.257 13:09:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.257 13:09:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.257 13:09:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.257 13:09:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:52.257 13:09:00 rpc -- scripts/common.sh@345 -- # : 1 00:03:52.257 13:09:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.257 13:09:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:52.257 13:09:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:52.257 13:09:00 rpc -- scripts/common.sh@353 -- # local d=1 00:03:52.257 13:09:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.257 13:09:00 rpc -- scripts/common.sh@355 -- # echo 1 00:03:52.257 13:09:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.257 13:09:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:52.257 13:09:00 rpc -- scripts/common.sh@353 -- # local d=2 00:03:52.257 13:09:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.257 13:09:00 rpc -- scripts/common.sh@355 -- # echo 2 00:03:52.257 13:09:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.257 13:09:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.257 13:09:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.257 13:09:00 rpc -- scripts/common.sh@368 -- # return 0 00:03:52.257 13:09:00 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.257 13:09:00 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:52.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.257 --rc genhtml_branch_coverage=1 00:03:52.257 --rc genhtml_function_coverage=1 00:03:52.257 --rc genhtml_legend=1 00:03:52.257 --rc geninfo_all_blocks=1 00:03:52.257 --rc geninfo_unexecuted_blocks=1 00:03:52.257 00:03:52.257 ' 00:03:52.257 13:09:00 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:52.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.257 --rc genhtml_branch_coverage=1 00:03:52.257 --rc genhtml_function_coverage=1 00:03:52.257 --rc genhtml_legend=1 00:03:52.257 --rc geninfo_all_blocks=1 00:03:52.257 --rc geninfo_unexecuted_blocks=1 00:03:52.257 00:03:52.257 ' 00:03:52.257 13:09:00 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:52.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.257 --rc genhtml_branch_coverage=1 00:03:52.257 --rc genhtml_function_coverage=1 
00:03:52.257 --rc genhtml_legend=1 00:03:52.257 --rc geninfo_all_blocks=1 00:03:52.257 --rc geninfo_unexecuted_blocks=1 00:03:52.257 00:03:52.257 ' 00:03:52.257 13:09:00 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:52.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.257 --rc genhtml_branch_coverage=1 00:03:52.257 --rc genhtml_function_coverage=1 00:03:52.257 --rc genhtml_legend=1 00:03:52.257 --rc geninfo_all_blocks=1 00:03:52.257 --rc geninfo_unexecuted_blocks=1 00:03:52.257 00:03:52.257 ' 00:03:52.257 13:09:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3579788 00:03:52.257 13:09:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.257 13:09:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3579788 00:03:52.257 13:09:00 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:52.257 13:09:00 rpc -- common/autotest_common.sh@833 -- # '[' -z 3579788 ']' 00:03:52.257 13:09:00 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.257 13:09:00 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:52.257 13:09:00 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.257 13:09:00 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:52.257 13:09:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.257 [2024-11-07 13:09:00.116880] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:03:52.257 [2024-11-07 13:09:00.116994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579788 ] 00:03:52.257 [2024-11-07 13:09:00.254812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.518 [2024-11-07 13:09:00.349775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:52.518 [2024-11-07 13:09:00.349823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3579788' to capture a snapshot of events at runtime. 00:03:52.518 [2024-11-07 13:09:00.349839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:52.518 [2024-11-07 13:09:00.349849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:52.518 [2024-11-07 13:09:00.349870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3579788 for offline analysis/debug. 
00:03:52.518 [2024-11-07 13:09:00.351071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.090 13:09:00 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:53.090 13:09:00 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:53.090 13:09:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:53.091 13:09:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:53.091 13:09:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:53.091 13:09:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:53.091 13:09:00 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:53.091 13:09:00 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:53.091 13:09:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.091 ************************************ 00:03:53.091 START TEST rpc_integrity 00:03:53.091 ************************************ 00:03:53.091 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:53.091 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:53.091 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.091 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.091 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.091 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:53.091 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:53.091 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:53.091 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:53.091 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.091 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.352 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.352 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:53.352 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:53.352 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.352 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.352 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.352 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:53.352 { 00:03:53.352 "name": "Malloc0", 00:03:53.352 "aliases": [ 00:03:53.352 "09586271-0359-4250-8a84-bb4feecdf059" 00:03:53.352 ], 00:03:53.352 "product_name": "Malloc disk", 00:03:53.352 "block_size": 512, 00:03:53.352 "num_blocks": 16384, 00:03:53.352 "uuid": "09586271-0359-4250-8a84-bb4feecdf059", 00:03:53.352 "assigned_rate_limits": { 00:03:53.352 "rw_ios_per_sec": 0, 00:03:53.352 "rw_mbytes_per_sec": 0, 00:03:53.352 "r_mbytes_per_sec": 0, 00:03:53.352 "w_mbytes_per_sec": 0 00:03:53.352 }, 
00:03:53.352 "claimed": false, 00:03:53.352 "zoned": false, 00:03:53.352 "supported_io_types": { 00:03:53.352 "read": true, 00:03:53.352 "write": true, 00:03:53.352 "unmap": true, 00:03:53.352 "flush": true, 00:03:53.352 "reset": true, 00:03:53.352 "nvme_admin": false, 00:03:53.352 "nvme_io": false, 00:03:53.352 "nvme_io_md": false, 00:03:53.352 "write_zeroes": true, 00:03:53.352 "zcopy": true, 00:03:53.352 "get_zone_info": false, 00:03:53.352 "zone_management": false, 00:03:53.352 "zone_append": false, 00:03:53.352 "compare": false, 00:03:53.352 "compare_and_write": false, 00:03:53.352 "abort": true, 00:03:53.352 "seek_hole": false, 00:03:53.352 "seek_data": false, 00:03:53.352 "copy": true, 00:03:53.352 "nvme_iov_md": false 00:03:53.352 }, 00:03:53.352 "memory_domains": [ 00:03:53.352 { 00:03:53.352 "dma_device_id": "system", 00:03:53.352 "dma_device_type": 1 00:03:53.352 }, 00:03:53.352 { 00:03:53.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.352 "dma_device_type": 2 00:03:53.352 } 00:03:53.352 ], 00:03:53.352 "driver_specific": {} 00:03:53.352 } 00:03:53.352 ]' 00:03:53.352 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:53.352 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:53.352 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:53.352 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.352 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.352 [2024-11-07 13:09:01.175955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:53.352 [2024-11-07 13:09:01.176022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:53.352 [2024-11-07 13:09:01.176048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001fe80 00:03:53.352 [2024-11-07 13:09:01.176061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:53.352 [2024-11-07 13:09:01.178336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:53.352 [2024-11-07 13:09:01.178367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:53.352 Passthru0 00:03:53.352 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.352 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:53.352 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.352 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.352 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.352 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:53.352 { 00:03:53.352 "name": "Malloc0", 00:03:53.352 "aliases": [ 00:03:53.352 "09586271-0359-4250-8a84-bb4feecdf059" 00:03:53.352 ], 00:03:53.352 "product_name": "Malloc disk", 00:03:53.352 "block_size": 512, 00:03:53.352 "num_blocks": 16384, 00:03:53.352 "uuid": "09586271-0359-4250-8a84-bb4feecdf059", 00:03:53.352 "assigned_rate_limits": { 00:03:53.352 "rw_ios_per_sec": 0, 00:03:53.352 "rw_mbytes_per_sec": 0, 00:03:53.352 "r_mbytes_per_sec": 0, 00:03:53.352 "w_mbytes_per_sec": 0 00:03:53.352 }, 00:03:53.352 "claimed": true, 00:03:53.352 "claim_type": "exclusive_write", 00:03:53.352 "zoned": false, 00:03:53.352 "supported_io_types": { 00:03:53.352 "read": true, 00:03:53.352 "write": true, 00:03:53.352 "unmap": true, 00:03:53.352 
"flush": true, 00:03:53.352 "reset": true, 00:03:53.352 "nvme_admin": false, 00:03:53.352 "nvme_io": false, 00:03:53.352 "nvme_io_md": false, 00:03:53.352 "write_zeroes": true, 00:03:53.352 "zcopy": true, 00:03:53.352 "get_zone_info": false, 00:03:53.352 "zone_management": false, 00:03:53.352 "zone_append": false, 00:03:53.352 "compare": false, 00:03:53.352 "compare_and_write": false, 00:03:53.352 "abort": true, 00:03:53.352 "seek_hole": false, 00:03:53.352 "seek_data": false, 00:03:53.352 "copy": true, 00:03:53.352 "nvme_iov_md": false 00:03:53.352 }, 00:03:53.352 "memory_domains": [ 00:03:53.352 { 00:03:53.352 "dma_device_id": "system", 00:03:53.352 "dma_device_type": 1 00:03:53.352 }, 00:03:53.352 { 00:03:53.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.352 "dma_device_type": 2 00:03:53.352 } 00:03:53.352 ], 00:03:53.352 "driver_specific": {} 00:03:53.352 }, 00:03:53.352 { 00:03:53.352 "name": "Passthru0", 00:03:53.352 "aliases": [ 00:03:53.352 "6ca68af3-0494-522d-a69c-e18217d759b6" 00:03:53.352 ], 00:03:53.352 "product_name": "passthru", 00:03:53.352 "block_size": 512, 00:03:53.352 "num_blocks": 16384, 00:03:53.352 "uuid": "6ca68af3-0494-522d-a69c-e18217d759b6", 00:03:53.352 "assigned_rate_limits": { 00:03:53.352 "rw_ios_per_sec": 0, 00:03:53.352 "rw_mbytes_per_sec": 0, 00:03:53.352 "r_mbytes_per_sec": 0, 00:03:53.352 "w_mbytes_per_sec": 0 00:03:53.352 }, 00:03:53.352 "claimed": false, 00:03:53.352 "zoned": false, 00:03:53.353 "supported_io_types": { 00:03:53.353 "read": true, 00:03:53.353 "write": true, 00:03:53.353 "unmap": true, 00:03:53.353 "flush": true, 00:03:53.353 "reset": true, 00:03:53.353 "nvme_admin": false, 00:03:53.353 "nvme_io": false, 00:03:53.353 "nvme_io_md": false, 00:03:53.353 "write_zeroes": true, 00:03:53.353 "zcopy": true, 00:03:53.353 "get_zone_info": false, 00:03:53.353 "zone_management": false, 00:03:53.353 "zone_append": false, 00:03:53.353 "compare": false, 00:03:53.353 "compare_and_write": false, 00:03:53.353 "abort": true, 00:03:53.353 "seek_hole": false, 00:03:53.353 "seek_data": false, 00:03:53.353 "copy": true, 00:03:53.353 "nvme_iov_md": false 00:03:53.353 }, 00:03:53.353 "memory_domains": [ 00:03:53.353 { 00:03:53.353 "dma_device_id": "system", 00:03:53.353 "dma_device_type": 1 00:03:53.353 }, 00:03:53.353 { 00:03:53.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.353 "dma_device_type": 2 00:03:53.353 } 00:03:53.353 ], 00:03:53.353 "driver_specific": { 00:03:53.353 "passthru": { 00:03:53.353 "name": "Passthru0", 00:03:53.353 "base_bdev_name": "Malloc0" 00:03:53.353 } 00:03:53.353 } 00:03:53.353 } 00:03:53.353 ]' 00:03:53.353 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:53.353 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:53.353 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:53.353 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.353 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.353 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.353 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:53.353 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.353 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.353 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.353 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:03:53.353 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.353 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.353 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.353 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:53.353 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:53.353 13:09:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:53.353 00:03:53.353 real 0m0.322s 00:03:53.353 user 0m0.197s 00:03:53.353 sys 0m0.039s 00:03:53.353 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:53.353 13:09:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.353 ************************************ 00:03:53.353 END TEST rpc_integrity 00:03:53.353 ************************************ 00:03:53.613 13:09:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:53.613 13:09:01 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:53.613 13:09:01 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:53.613 13:09:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.613 ************************************ 00:03:53.613 START TEST rpc_plugins 00:03:53.613 ************************************ 00:03:53.613 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:53.613 13:09:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:53.613 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.613 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.613 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.613 13:09:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:53.613 13:09:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:53.613 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.613 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.614 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.614 13:09:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:53.614 { 00:03:53.614 "name": "Malloc1", 00:03:53.614 "aliases": [ 00:03:53.614 "9b63cf5a-1a78-4681-a802-4d13dc379c8d" 00:03:53.614 ], 00:03:53.614 "product_name": "Malloc disk", 00:03:53.614 "block_size": 4096, 00:03:53.614 "num_blocks": 256, 00:03:53.614 "uuid": "9b63cf5a-1a78-4681-a802-4d13dc379c8d", 00:03:53.614 "assigned_rate_limits": { 00:03:53.614 "rw_ios_per_sec": 0, 00:03:53.614 "rw_mbytes_per_sec": 0, 00:03:53.614 "r_mbytes_per_sec": 0, 00:03:53.614 "w_mbytes_per_sec": 0 00:03:53.614 }, 00:03:53.614 "claimed": false, 00:03:53.614 "zoned": false, 00:03:53.614 "supported_io_types": { 00:03:53.614 "read": true, 00:03:53.614 "write": true, 00:03:53.614 "unmap": true, 00:03:53.614 "flush": true, 00:03:53.614 "reset": true, 00:03:53.614 "nvme_admin": false, 00:03:53.614 "nvme_io": false, 00:03:53.614 "nvme_io_md": false, 00:03:53.614 "write_zeroes": true, 00:03:53.614 "zcopy": true, 00:03:53.614 "get_zone_info": false, 00:03:53.614 "zone_management": false, 00:03:53.614 "zone_append": false, 00:03:53.614 "compare": false, 00:03:53.614 "compare_and_write": false, 00:03:53.614 "abort": true, 00:03:53.614 "seek_hole": false, 00:03:53.614 "seek_data": false, 00:03:53.614 "copy": true, 00:03:53.614 "nvme_iov_md": 
false 00:03:53.614 }, 00:03:53.614 "memory_domains": [ 00:03:53.614 { 00:03:53.614 "dma_device_id": "system", 00:03:53.614 "dma_device_type": 1 00:03:53.614 }, 00:03:53.614 { 00:03:53.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.614 "dma_device_type": 2 00:03:53.614 } 00:03:53.614 ], 00:03:53.614 "driver_specific": {} 00:03:53.614 } 00:03:53.614 ]' 00:03:53.614 13:09:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:53.614 13:09:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:53.614 13:09:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:53.614 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.614 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.614 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.614 13:09:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:53.614 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.614 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.614 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.614 13:09:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:53.614 13:09:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:53.614 13:09:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:53.614 00:03:53.614 real 0m0.154s 00:03:53.614 user 0m0.100s 00:03:53.614 sys 0m0.017s 00:03:53.614 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:53.614 13:09:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.614 ************************************ 00:03:53.614 END TEST rpc_plugins 00:03:53.614 ************************************ 00:03:53.614 13:09:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:53.875 13:09:01 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:53.875 13:09:01 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:53.875 13:09:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.875 ************************************ 00:03:53.875 START TEST rpc_trace_cmd_test 00:03:53.875 ************************************ 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:53.875 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3579788", 00:03:53.875 "tpoint_group_mask": "0x8", 00:03:53.875 "iscsi_conn": { 00:03:53.875 "mask": "0x2", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "scsi": { 00:03:53.875 "mask": "0x4", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "bdev": { 00:03:53.875 "mask": "0x8", 00:03:53.875 "tpoint_mask": "0xffffffffffffffff" 00:03:53.875 }, 00:03:53.875 "nvmf_rdma": { 00:03:53.875 "mask": "0x10", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "nvmf_tcp": { 00:03:53.875 "mask": "0x20", 00:03:53.875 
"tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "ftl": { 00:03:53.875 "mask": "0x40", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "blobfs": { 00:03:53.875 "mask": "0x80", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "dsa": { 00:03:53.875 "mask": "0x200", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "thread": { 00:03:53.875 "mask": "0x400", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "nvme_pcie": { 00:03:53.875 "mask": "0x800", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "iaa": { 00:03:53.875 "mask": "0x1000", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "nvme_tcp": { 00:03:53.875 "mask": "0x2000", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "bdev_nvme": { 00:03:53.875 "mask": "0x4000", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "sock": { 00:03:53.875 "mask": "0x8000", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "blob": { 00:03:53.875 "mask": "0x10000", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "bdev_raid": { 00:03:53.875 "mask": "0x20000", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 }, 00:03:53.875 "scheduler": { 00:03:53.875 "mask": "0x40000", 00:03:53.875 "tpoint_mask": "0x0" 00:03:53.875 } 00:03:53.875 }' 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:53.875 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:54.137 13:09:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:54.137 00:03:54.137 real 0m0.251s 00:03:54.137 user 0m0.209s 00:03:54.137 sys 0m0.033s 00:03:54.137 13:09:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.137 13:09:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:54.137 ************************************ 00:03:54.137 END TEST rpc_trace_cmd_test 00:03:54.137 ************************************ 00:03:54.137 13:09:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:54.137 13:09:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:54.137 13:09:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:54.137 13:09:01 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:54.137 13:09:01 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:54.137 13:09:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.137 ************************************ 00:03:54.137 START TEST rpc_daemon_integrity 00:03:54.137 ************************************ 00:03:54.137 13:09:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:54.137 13:09:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:54.137 13:09:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.137 13:09:01 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.137 13:09:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:54.137 13:09:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:54.137 13:09:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:54.137 { 00:03:54.137 "name": "Malloc2", 00:03:54.137 "aliases": [ 00:03:54.137 "bb73af44-0a50-4688-87bf-54eafcfaae82" 00:03:54.137 ], 00:03:54.137 "product_name": "Malloc disk", 00:03:54.137 "block_size": 512, 00:03:54.137 "num_blocks": 16384, 00:03:54.137 "uuid": "bb73af44-0a50-4688-87bf-54eafcfaae82", 00:03:54.137 "assigned_rate_limits": { 00:03:54.137 "rw_ios_per_sec": 0, 00:03:54.137 "rw_mbytes_per_sec": 0, 00:03:54.137 "r_mbytes_per_sec": 0, 00:03:54.137 "w_mbytes_per_sec": 0 00:03:54.137 }, 00:03:54.137 "claimed": false, 00:03:54.137 "zoned": false, 00:03:54.137 "supported_io_types": { 00:03:54.137 "read": true, 00:03:54.137 "write": true, 00:03:54.137 "unmap": true, 00:03:54.137 "flush": true, 00:03:54.137 "reset": true, 00:03:54.137 "nvme_admin": false, 00:03:54.137 "nvme_io": false, 00:03:54.137 "nvme_io_md": false, 00:03:54.137 "write_zeroes": true, 00:03:54.137 "zcopy": true, 00:03:54.137 "get_zone_info": false, 00:03:54.137 "zone_management": false, 00:03:54.137 "zone_append": false, 00:03:54.137 "compare": false, 00:03:54.137 "compare_and_write": false, 00:03:54.137 "abort": true, 00:03:54.137 "seek_hole": false, 00:03:54.137 "seek_data": false, 00:03:54.137 "copy": true, 00:03:54.137 "nvme_iov_md": false 00:03:54.137 }, 00:03:54.137 "memory_domains": [ 00:03:54.137 { 00:03:54.137 "dma_device_id": "system", 00:03:54.137 "dma_device_type": 1 00:03:54.137 }, 00:03:54.137 { 00:03:54.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:54.137 "dma_device_type": 2 00:03:54.137 } 00:03:54.137 ], 00:03:54.137 "driver_specific": {} 00:03:54.137 } 00:03:54.137 ]' 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.137 [2024-11-07 13:09:02.130858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:54.137 
[2024-11-07 13:09:02.130911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:54.137 [2024-11-07 13:09:02.130934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021080 00:03:54.137 [2024-11-07 13:09:02.130945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:54.137 [2024-11-07 13:09:02.133152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:54.137 [2024-11-07 13:09:02.133180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:54.137 Passthru0 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.137 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:54.399 { 00:03:54.399 "name": "Malloc2", 00:03:54.399 "aliases": [ 00:03:54.399 "bb73af44-0a50-4688-87bf-54eafcfaae82" 00:03:54.399 ], 00:03:54.399 "product_name": "Malloc disk", 00:03:54.399 "block_size": 512, 00:03:54.399 "num_blocks": 16384, 00:03:54.399 "uuid": "bb73af44-0a50-4688-87bf-54eafcfaae82", 00:03:54.399 "assigned_rate_limits": { 00:03:54.399 "rw_ios_per_sec": 0, 00:03:54.399 "rw_mbytes_per_sec": 0, 00:03:54.399 "r_mbytes_per_sec": 0, 00:03:54.399 "w_mbytes_per_sec": 0 00:03:54.399 }, 00:03:54.399 "claimed": true, 00:03:54.399 "claim_type": "exclusive_write", 00:03:54.399 "zoned": false, 00:03:54.399 "supported_io_types": { 00:03:54.399 "read": true, 00:03:54.399 "write": true, 00:03:54.399 "unmap": true, 00:03:54.399 "flush": true, 00:03:54.399 "reset": true, 00:03:54.399 "nvme_admin": false, 00:03:54.399 "nvme_io": false, 00:03:54.399 "nvme_io_md": false, 00:03:54.399 "write_zeroes": true, 00:03:54.399 "zcopy": true, 00:03:54.399 "get_zone_info": false, 00:03:54.399 "zone_management": false, 00:03:54.399 "zone_append": false, 00:03:54.399 "compare": false, 00:03:54.399 "compare_and_write": false, 00:03:54.399 "abort": true, 00:03:54.399 "seek_hole": false, 00:03:54.399 "seek_data": false, 00:03:54.399 "copy": true, 00:03:54.399 "nvme_iov_md": false 00:03:54.399 }, 00:03:54.399 "memory_domains": [ 00:03:54.399 { 00:03:54.399 "dma_device_id": "system", 00:03:54.399 "dma_device_type": 1 00:03:54.399 }, 00:03:54.399 { 00:03:54.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:54.399 "dma_device_type": 2 00:03:54.399 } 00:03:54.399 ], 00:03:54.399 "driver_specific": {} 00:03:54.399 }, 00:03:54.399 { 00:03:54.399 "name": "Passthru0", 00:03:54.399 "aliases": [ 00:03:54.399 "dbb5e256-beb0-55bc-8a9d-851da0221c8c" 00:03:54.399 ], 00:03:54.399 "product_name": "passthru", 00:03:54.399 "block_size": 512, 00:03:54.399 "num_blocks": 16384, 00:03:54.399 "uuid": "dbb5e256-beb0-55bc-8a9d-851da0221c8c", 00:03:54.399 "assigned_rate_limits": { 00:03:54.399 "rw_ios_per_sec": 0, 00:03:54.399 "rw_mbytes_per_sec": 0, 00:03:54.399 "r_mbytes_per_sec": 0, 00:03:54.399 "w_mbytes_per_sec": 0 00:03:54.399 }, 00:03:54.399 "claimed": false, 00:03:54.399 "zoned": false, 00:03:54.399 "supported_io_types": { 00:03:54.399 "read": true, 00:03:54.399 "write": true, 00:03:54.399 "unmap": true, 00:03:54.399 "flush": true, 00:03:54.399 "reset": true, 
00:03:54.399 "nvme_admin": false, 00:03:54.399 "nvme_io": false, 00:03:54.399 "nvme_io_md": false, 00:03:54.399 "write_zeroes": true, 00:03:54.399 "zcopy": true, 00:03:54.399 "get_zone_info": false, 00:03:54.399 "zone_management": false, 00:03:54.399 "zone_append": false, 00:03:54.399 "compare": false, 00:03:54.399 "compare_and_write": false, 00:03:54.399 "abort": true, 00:03:54.399 "seek_hole": false, 00:03:54.399 "seek_data": false, 00:03:54.399 "copy": true, 00:03:54.399 "nvme_iov_md": false 00:03:54.399 }, 00:03:54.399 "memory_domains": [ 00:03:54.399 { 00:03:54.399 "dma_device_id": "system", 00:03:54.399 "dma_device_type": 1 00:03:54.399 }, 00:03:54.399 { 00:03:54.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:54.399 "dma_device_type": 2 00:03:54.399 } 00:03:54.399 ], 00:03:54.399 "driver_specific": { 00:03:54.399 "passthru": { 00:03:54.399 "name": "Passthru0", 00:03:54.399 "base_bdev_name": "Malloc2" 00:03:54.399 } 00:03:54.399 } 00:03:54.399 } 00:03:54.399 ]' 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:54.399 00:03:54.399 real 0m0.324s 00:03:54.399 user 0m0.189s 00:03:54.399 sys 0m0.049s 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.399 13:09:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.399 ************************************ 00:03:54.399 END TEST rpc_daemon_integrity 00:03:54.399 ************************************ 00:03:54.399 13:09:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:54.399 13:09:02 rpc -- rpc/rpc.sh@84 -- # killprocess 3579788 00:03:54.399 13:09:02 rpc -- common/autotest_common.sh@952 -- # '[' -z 3579788 ']' 00:03:54.399 13:09:02 rpc -- common/autotest_common.sh@956 -- # kill -0 3579788 00:03:54.399 13:09:02 rpc -- common/autotest_common.sh@957 -- # uname 00:03:54.399 13:09:02 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:54.399 13:09:02 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3579788 
00:03:54.660 13:09:02 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:54.660 13:09:02 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:54.660 13:09:02 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3579788' 00:03:54.660 killing process with pid 3579788 00:03:54.660 13:09:02 rpc -- common/autotest_common.sh@971 -- # kill 3579788 00:03:54.660 13:09:02 rpc -- common/autotest_common.sh@976 -- # wait 3579788 00:03:56.043 00:03:56.043 real 0m4.182s 00:03:56.043 user 0m4.848s 00:03:56.043 sys 0m0.882s 00:03:56.043 13:09:04 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.043 13:09:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.043 ************************************ 00:03:56.043 END TEST rpc 00:03:56.043 ************************************ 00:03:56.043 13:09:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:56.043 13:09:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.043 13:09:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.043 13:09:04 -- common/autotest_common.sh@10 -- # set +x 00:03:56.304 ************************************ 00:03:56.304 START TEST skip_rpc 00:03:56.304 ************************************ 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:56.304 * Looking for test storage... 00:03:56.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.304 13:09:04 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:56.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.304 --rc genhtml_branch_coverage=1 00:03:56.304 --rc genhtml_function_coverage=1 00:03:56.304 --rc genhtml_legend=1 00:03:56.304 --rc geninfo_all_blocks=1 00:03:56.304 --rc geninfo_unexecuted_blocks=1 00:03:56.304 00:03:56.304 ' 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:56.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.304 --rc genhtml_branch_coverage=1 00:03:56.304 --rc genhtml_function_coverage=1 00:03:56.304 --rc genhtml_legend=1 00:03:56.304 --rc geninfo_all_blocks=1 00:03:56.304 --rc geninfo_unexecuted_blocks=1 00:03:56.304 00:03:56.304 ' 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:56.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.304 --rc genhtml_branch_coverage=1 00:03:56.304 --rc genhtml_function_coverage=1 00:03:56.304 --rc genhtml_legend=1 00:03:56.304 --rc geninfo_all_blocks=1 00:03:56.304 --rc geninfo_unexecuted_blocks=1 00:03:56.304 00:03:56.304 ' 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:56.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.304 --rc genhtml_branch_coverage=1 00:03:56.304 --rc genhtml_function_coverage=1 00:03:56.304 --rc genhtml_legend=1 00:03:56.304 --rc geninfo_all_blocks=1 00:03:56.304 --rc geninfo_unexecuted_blocks=1 00:03:56.304 00:03:56.304 ' 00:03:56.304 13:09:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:56.304 13:09:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:56.304 13:09:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.304 13:09:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.304 ************************************ 00:03:56.304 START TEST skip_rpc 00:03:56.304 ************************************ 00:03:56.304 13:09:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:56.304 
13:09:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3580756 00:03:56.304 13:09:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.304 13:09:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:56.304 13:09:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:56.565 [2024-11-07 13:09:04.384949] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:03:56.565 [2024-11-07 13:09:04.385056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3580756 ] 00:03:56.565 [2024-11-07 13:09:04.520447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.825 [2024-11-07 13:09:04.616167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3580756 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3580756 ']' 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3580756 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3580756 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3580756' 00:04:02.110 killing process with pid 3580756 00:04:02.110 13:09:09 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3580756 00:04:02.110 13:09:09 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3580756 00:04:03.132 00:04:03.132 real 0m6.679s 00:04:03.132 user 0m6.353s 00:04:03.132 sys 0m0.358s 00:04:03.132 13:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:03.132 13:09:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.132 ************************************ 00:04:03.132 END TEST skip_rpc 00:04:03.132 ************************************ 00:04:03.132 13:09:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:03.132 13:09:11 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:03.132 13:09:11 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:03.132 13:09:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.132 ************************************ 00:04:03.132 START TEST skip_rpc_with_json 00:04:03.132 ************************************ 00:04:03.132 13:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:03.132 13:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:03.132 13:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:03.132 13:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3582596 00:04:03.132 13:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.132 13:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3582596 00:04:03.132 13:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3582596 ']' 00:04:03.132 13:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.132 13:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:03.132 13:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.132 13:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:03.132 13:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.132 [2024-11-07 13:09:11.124236] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:04:03.132 [2024-11-07 13:09:11.124349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582596 ]
00:04:03.392 [2024-11-07 13:09:11.260165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:03.392 [2024-11-07 13:09:11.358369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:04.334 13:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:04:04.334 13:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0
00:04:04.334 13:09:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:04.334 13:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:04.334 13:09:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:04.334 [2024-11-07 13:09:12.004295] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:04.334 request:
{
  "trtype": "tcp",
  "method": "nvmf_get_transports",
  "req_id": 1
}
00:04:04.334 Got JSON-RPC error response
00:04:04.334 response:
{
  "code": -19,
  "message": "No such device"
}
00:04:04.334 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:04:04.334 13:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:04.334 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:04.334 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:04.334 [2024-11-07 13:09:12.016431] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:04.334 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:04.334 13:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:04.334 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:04.334 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:04.334 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:04.334 13:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
{
  "subsystems": [
    {
      "subsystem": "fsdev",
      "config": [
        {
          "method": "fsdev_set_opts",
          "params": {
            "fsdev_io_pool_size": 65535,
            "fsdev_io_cache_size": 256
          }
        }
      ]
    },
    {
      "subsystem": "keyring",
      "config": []
    },
    {
      "subsystem": "iobuf",
      "config": [
        {
          "method": "iobuf_set_options",
          "params": {
            "small_pool_count": 8192,
            "large_pool_count": 1024,
            "small_bufsize": 8192,
            "large_bufsize": 135168,
            "enable_numa": false
          }
        }
      ]
    },
    {
      "subsystem": "sock",
      "config": [
        {
          "method": "sock_set_default_impl",
          "params": {
            "impl_name": "posix"
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "ssl",
            "recv_buf_size": 4096,
            "send_buf_size": 4096,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": true,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "posix",
            "recv_buf_size": 2097152,
            "send_buf_size": 2097152,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": true,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        }
      ]
    },
    {
      "subsystem": "vmd",
      "config": []
    },
    {
      "subsystem": "accel",
      "config": [
        {
          "method": "accel_set_options",
          "params": {
            "small_cache_size": 128,
            "large_cache_size": 16,
            "task_count": 2048,
            "sequence_count": 2048,
            "buf_count": 2048
          }
        }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_set_options",
          "params": {
            "bdev_io_pool_size": 65535,
            "bdev_io_cache_size": 256,
            "bdev_auto_examine": true,
            "iobuf_small_cache_size": 128,
            "iobuf_large_cache_size": 16
          }
        },
        {
          "method": "bdev_raid_set_options",
          "params": {
            "process_window_size_kb": 1024,
            "process_max_bandwidth_mb_sec": 0
          }
        },
        {
          "method": "bdev_iscsi_set_options",
          "params": {
            "timeout_sec": 30
          }
        },
        {
          "method": "bdev_nvme_set_options",
          "params": {
            "action_on_timeout": "none",
            "timeout_us": 0,
            "timeout_admin_us": 0,
            "keep_alive_timeout_ms": 10000,
            "arbitration_burst": 0,
            "low_priority_weight": 0,
            "medium_priority_weight": 0,
            "high_priority_weight": 0,
            "nvme_adminq_poll_period_us": 10000,
            "nvme_ioq_poll_period_us": 0,
            "io_queue_requests": 0,
            "delay_cmd_submit": true,
            "transport_retry_count": 4,
            "bdev_retry_count": 3,
            "transport_ack_timeout": 0,
            "ctrlr_loss_timeout_sec": 0,
            "reconnect_delay_sec": 0,
            "fast_io_fail_timeout_sec": 0,
            "disable_auto_failback": false,
            "generate_uuids": false,
            "transport_tos": 0,
            "nvme_error_stat": false,
            "rdma_srq_size": 0,
            "io_path_stat": false,
            "allow_accel_sequence": false,
            "rdma_max_cq_size": 0,
            "rdma_cm_event_timeout_ms": 0,
            "dhchap_digests": ["sha256", "sha384", "sha512"],
            "dhchap_dhgroups": ["null", "ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192"]
          }
        },
        {
          "method": "bdev_nvme_set_hotplug",
          "params": {
            "period_us": 100000,
            "enable": false
          }
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    },
    {
      "subsystem": "scsi",
      "config": null
    },
    {
      "subsystem": "scheduler",
      "config": [
        {
          "method": "framework_set_scheduler",
          "params": {
            "name": "static"
          }
        }
      ]
    },
    {
      "subsystem": "vhost_scsi",
      "config": []
    },
    {
      "subsystem": "vhost_blk",
      "config": []
    },
    {
      "subsystem": "ublk",
      "config": []
    },
    {
      "subsystem": "nbd",
      "config": []
    },
    {
      "subsystem": "nvmf",
      "config": [
        {
          "method": "nvmf_set_config",
          "params": {
            "discovery_filter": "match_any",
            "admin_cmd_passthru": {
              "identify_ctrlr": false
            },
            "dhchap_digests": ["sha256", "sha384", "sha512"],
            "dhchap_dhgroups": ["null", "ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192"]
          }
        },
        {
          "method": "nvmf_set_max_subsystems",
          "params": {
            "max_subsystems": 1024
          }
        },
        {
          "method": "nvmf_set_crdt",
          "params": {
            "crdt1": 0,
            "crdt2": 0,
            "crdt3": 0
          }
        },
        {
          "method": "nvmf_create_transport",
          "params": {
            "trtype": "TCP",
            "max_queue_depth": 128,
            "max_io_qpairs_per_ctrlr": 127,
            "in_capsule_data_size": 4096,
            "max_io_size": 131072,
            "io_unit_size": 131072,
            "max_aq_depth": 128,
            "num_shared_buffers": 511,
            "buf_cache_size": 4294967295,
            "dif_insert_or_strip": false,
            "zcopy": false,
            "c2h_success": true,
            "sock_priority": 0,
            "abort_timeout_sec": 1,
            "ack_timeout": 0,
            "data_wr_pool_size": 0
          }
        }
      ]
    },
    {
      "subsystem": "iscsi",
      "config": [
        {
          "method": "iscsi_set_options",
          "params": {
            "node_base": "iqn.2016-06.io.spdk",
            "max_sessions": 128,
            "max_connections_per_session": 2,
            "max_queue_depth": 64,
            "default_time2wait": 2,
            "default_time2retain": 20,
            "first_burst_length": 8192,
            "immediate_data": true,
            "allow_duplicated_isid": false,
            "error_recovery_level": 0,
            "nop_timeout": 60,
            "nop_in_interval": 30,
            "disable_chap": false,
            "require_chap": false,
            "mutual_chap": false,
            "chap_group": 0,
            "max_large_datain_per_connection": 64,
            "max_r2t_per_connection": 4,
            "pdu_pool_size": 36864,
            "immediate_data_pool_size": 16384,
            "data_out_pool_size": 2048
          }
        }
      ]
    }
  ]
}
00:04:04.336 13:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:04.336 13:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3582596
00:04:04.336 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3582596 ']'
00:04:04.336 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3582596
00:04:04.336 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname
00:04:04.336 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:04:04.336 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3582596
00:04:04.336 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:04:04.336 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:04:04.336 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3582596'
killing process with pid 3582596
00:04:04.336 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3582596
00:04:04.336 13:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3582596
00:04:06.250 13:09:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3583262
00:04:06.250 13:09:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:06.250 13:09:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:11.537 13:09:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3583262
00:04:11.537 13:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3583262 ']'
00:04:11.537 13:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3583262
00:04:11.537 13:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname
00:04:11.537 13:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:04:11.537 13:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3583262
00:04:11.537 13:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:04:11.537 13:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:04:11.537 13:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3583262'
killing process with pid 3583262
13:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3583262 00:04:11.537 13:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3583262 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:12.922 00:04:12.922 real 0m9.468s 00:04:12.922 user 0m9.108s 00:04:12.922 sys 0m0.805s 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.922 ************************************ 00:04:12.922 END TEST skip_rpc_with_json 00:04:12.922 ************************************ 00:04:12.922 13:09:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:12.922 13:09:20 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:12.922 13:09:20 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.922 13:09:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.922 ************************************ 00:04:12.922 START TEST skip_rpc_with_delay 00:04:12.922 ************************************ 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.922 [2024-11-07 13:09:20.688820] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC 
server is going to be started. 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:12.922 00:04:12.922 real 0m0.160s 00:04:12.922 user 0m0.094s 00:04:12.922 sys 0m0.065s 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.922 13:09:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:12.922 ************************************ 00:04:12.922 END TEST skip_rpc_with_delay 00:04:12.922 ************************************ 00:04:12.922 13:09:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:12.922 13:09:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:12.922 13:09:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:12.922 13:09:20 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:12.922 13:09:20 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.922 13:09:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.922 ************************************ 00:04:12.922 START TEST exit_on_failed_rpc_init 00:04:12.922 ************************************ 00:04:12.922 13:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:12.922 13:09:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3584661 00:04:12.922 13:09:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3584661 00:04:12.922 13:09:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:12.922 13:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3584661 ']' 00:04:12.922 13:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.922 13:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:12.922 13:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.922 13:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:12.922 13:09:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.183 [2024-11-07 13:09:20.930947] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:04:13.183 [2024-11-07 13:09:20.931054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584661 ] 00:04:13.183 [2024-11-07 13:09:21.072723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.183 [2024-11-07 13:09:21.172338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.125 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:14.125 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:14.125 13:09:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.125 13:09:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:14.125 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:14.125 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:14.125 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.125 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:14.126 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.126 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:14.126 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.126 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:14.126 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.126 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:14.126 13:09:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:14.126 [2024-11-07 13:09:21.916428] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:04:14.126 [2024-11-07 13:09:21.916537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584995 ] 00:04:14.126 [2024-11-07 13:09:22.069301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.386 [2024-11-07 13:09:22.165875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.386 [2024-11-07 13:09:22.165959] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
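exit_on_failed_rpc_init deliberately provokes the error that follows: with the first target holding /var/tmp/spdk.sock, a second spdk_tgt must fail to bind, and the NOT helper turns that expected failure into a pass. Roughly, under the assumption that autotest_common.sh's NOT simply inverts the exit status (the real helper also inspects specific exit codes, as the es= lines below show):

    NOT() {
        if "$@"; then
            return 1      # command unexpectedly succeeded -> assertion fails
        fi
        return 0          # command failed as required -> assertion passes
    }

    build/bin/spdk_tgt -m 0x1 &        # first instance owns /var/tmp/spdk.sock
    waitforlisten $!
    NOT build/bin/spdk_tgt -m 0x2      # second instance must refuse to start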
00:04:14.386 [2024-11-07 13:09:22.165977] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:14.386 [2024-11-07 13:09:22.165988] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3584661 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3584661 ']' 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3584661 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:14.386 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3584661 00:04:14.646 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:14.646 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:14.646 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3584661' 00:04:14.646 killing process with pid 3584661 00:04:14.646 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3584661 00:04:14.646 13:09:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3584661 00:04:16.029 00:04:16.029 real 0m3.189s 00:04:16.029 user 0m3.477s 00:04:16.029 sys 0m0.623s 00:04:16.029 13:09:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.029 13:09:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:16.029 ************************************ 00:04:16.029 END TEST exit_on_failed_rpc_init 00:04:16.029 ************************************ 00:04:16.289 13:09:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.289 00:04:16.289 real 0m19.991s 00:04:16.289 user 0m19.256s 00:04:16.289 sys 0m2.148s 00:04:16.289 13:09:24 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.289 13:09:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.289 ************************************ 00:04:16.289 END TEST skip_rpc 00:04:16.289 ************************************ 00:04:16.289 13:09:24 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:16.289 13:09:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.289 13:09:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.289 13:09:24 -- 
common/autotest_common.sh@10 -- # set +x 00:04:16.289 ************************************ 00:04:16.289 START TEST rpc_client 00:04:16.289 ************************************ 00:04:16.289 13:09:24 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:16.289 * Looking for test storage... 00:04:16.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:16.289 13:09:24 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:16.289 13:09:24 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:16.289 13:09:24 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:16.549 13:09:24 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.549 13:09:24 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:16.549 13:09:24 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.549 13:09:24 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:16.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.549 --rc genhtml_branch_coverage=1 00:04:16.549 --rc genhtml_function_coverage=1 00:04:16.549 --rc genhtml_legend=1 00:04:16.549 --rc geninfo_all_blocks=1 00:04:16.549 --rc geninfo_unexecuted_blocks=1 00:04:16.549 00:04:16.549 ' 00:04:16.549 13:09:24 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:16.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.550 --rc genhtml_branch_coverage=1 00:04:16.550 --rc genhtml_function_coverage=1 00:04:16.550 --rc genhtml_legend=1 00:04:16.550 --rc geninfo_all_blocks=1 00:04:16.550 --rc geninfo_unexecuted_blocks=1 00:04:16.550 00:04:16.550 ' 00:04:16.550 13:09:24 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:16.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.550 --rc genhtml_branch_coverage=1 00:04:16.550 --rc genhtml_function_coverage=1 00:04:16.550 --rc genhtml_legend=1 00:04:16.550 --rc geninfo_all_blocks=1 00:04:16.550 --rc geninfo_unexecuted_blocks=1 00:04:16.550 00:04:16.550 ' 00:04:16.550 13:09:24 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:16.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.550 --rc genhtml_branch_coverage=1 00:04:16.550 --rc genhtml_function_coverage=1 00:04:16.550 --rc genhtml_legend=1 00:04:16.550 --rc geninfo_all_blocks=1 00:04:16.550 --rc geninfo_unexecuted_blocks=1 00:04:16.550 00:04:16.550 ' 00:04:16.550 13:09:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:16.550 OK 00:04:16.550 13:09:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:16.550 00:04:16.550 real 0m0.260s 00:04:16.550 user 0m0.149s 00:04:16.550 sys 0m0.121s 00:04:16.550 13:09:24 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.550 13:09:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:16.550 ************************************ 00:04:16.550 END TEST rpc_client 00:04:16.550 ************************************ 00:04:16.550 13:09:24 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
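The cmp_versions walk traced here (and repeated verbatim for each suite that checks lcov) is a plain field-by-field comparison of dotted version strings. A condensed reading covering only the '<' path exercised by lt 1.15 2; the helper in scripts/common.sh handles the other operators and separators as well:

    # Sketch: compare dotted versions field by field; missing fields count as 0.
    lt() {
        local IFS=.
        local -a ver1=($1) ver2=($2)
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not strictly less
    }
    lt 1.15 2 && echo "lcov predates the 2.x output format"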
00:04:16.550 13:09:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.550 13:09:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.550 13:09:24 -- common/autotest_common.sh@10 -- # set +x 00:04:16.550 ************************************ 00:04:16.550 START TEST json_config 00:04:16.550 ************************************ 00:04:16.550 13:09:24 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:16.811 13:09:24 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:16.811 13:09:24 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:16.811 13:09:24 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:16.811 13:09:24 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:16.811 13:09:24 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.811 13:09:24 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.811 13:09:24 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.811 13:09:24 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.811 13:09:24 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.811 13:09:24 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.811 13:09:24 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.811 13:09:24 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.811 13:09:24 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.811 13:09:24 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.811 13:09:24 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.811 13:09:24 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:16.811 13:09:24 json_config -- scripts/common.sh@345 -- # : 1 00:04:16.811 13:09:24 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.811 13:09:24 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.811 13:09:24 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:16.811 13:09:24 json_config -- scripts/common.sh@353 -- # local d=1 00:04:16.811 13:09:24 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.811 13:09:24 json_config -- scripts/common.sh@355 -- # echo 1 00:04:16.811 13:09:24 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.811 13:09:24 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:16.811 13:09:24 json_config -- scripts/common.sh@353 -- # local d=2 00:04:16.811 13:09:24 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.811 13:09:24 json_config -- scripts/common.sh@355 -- # echo 2 00:04:16.811 13:09:24 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.811 13:09:24 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.811 13:09:24 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.811 13:09:24 json_config -- scripts/common.sh@368 -- # return 0 00:04:16.811 13:09:24 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.811 13:09:24 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:16.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.811 --rc genhtml_branch_coverage=1 00:04:16.811 --rc genhtml_function_coverage=1 00:04:16.811 --rc genhtml_legend=1 00:04:16.811 --rc geninfo_all_blocks=1 00:04:16.811 --rc geninfo_unexecuted_blocks=1 00:04:16.811 00:04:16.811 ' 00:04:16.811 13:09:24 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:16.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.811 --rc genhtml_branch_coverage=1 00:04:16.811 --rc genhtml_function_coverage=1 00:04:16.811 --rc genhtml_legend=1 00:04:16.811 --rc geninfo_all_blocks=1 00:04:16.811 --rc geninfo_unexecuted_blocks=1 00:04:16.811 00:04:16.811 ' 00:04:16.811 13:09:24 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:16.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.811 --rc genhtml_branch_coverage=1 00:04:16.811 --rc genhtml_function_coverage=1 00:04:16.811 --rc genhtml_legend=1 00:04:16.811 --rc geninfo_all_blocks=1 00:04:16.811 --rc geninfo_unexecuted_blocks=1 00:04:16.811 00:04:16.811 ' 00:04:16.811 13:09:24 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:16.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.811 --rc genhtml_branch_coverage=1 00:04:16.811 --rc genhtml_function_coverage=1 00:04:16.811 --rc genhtml_legend=1 00:04:16.811 --rc geninfo_all_blocks=1 00:04:16.811 --rc geninfo_unexecuted_blocks=1 00:04:16.811 00:04:16.811 ' 00:04:16.811 13:09:24 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:16.811 13:09:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:16.811 13:09:24 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:16.811 13:09:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:16.811 13:09:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:16.811 13:09:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:16.811 13:09:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:16.811 13:09:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.811 13:09:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.812 13:09:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.812 13:09:24 json_config -- paths/export.sh@5 -- # export PATH 00:04:16.812 13:09:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.812 13:09:24 json_config -- nvmf/common.sh@51 -- # : 0 00:04:16.812 13:09:24 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:16.812 13:09:24 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
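The host identity above comes from nvme-cli: gen-hostnqn emits an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the test keeps both the full NQN and the bare UUID. One way to reproduce the pair; the exact extraction inside nvmf/common.sh is not visible in this trace, so the parameter expansion below is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep only the UUID portion (assumed)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")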
00:04:16.812 13:09:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:16.812 13:09:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:16.812 13:09:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:16.812 13:09:24 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:16.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:16.812 13:09:24 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:16.812 13:09:24 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:16.812 13:09:24 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:16.812 INFO: JSON configuration test init 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:16.812 13:09:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.812 13:09:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:16.812 13:09:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.812 13:09:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.812 13:09:24 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:16.812 13:09:24 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:16.812 13:09:24 json_config -- json_config/common.sh@10 -- # shift 00:04:16.812 13:09:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:16.812 13:09:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:16.812 13:09:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:16.812 13:09:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.812 13:09:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.812 13:09:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3585598 00:04:16.812 13:09:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:16.812 Waiting for target to run... 00:04:16.812 13:09:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:16.812 13:09:24 json_config -- json_config/common.sh@25 -- # waitforlisten 3585598 /var/tmp/spdk_tgt.sock 00:04:16.812 13:09:24 json_config -- common/autotest_common.sh@833 -- # '[' -z 3585598 ']' 00:04:16.812 13:09:24 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:16.812 13:09:24 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:16.812 13:09:24 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:16.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:16.812 13:09:24 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:16.812 13:09:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.812 [2024-11-07 13:09:24.806900] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
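json_config/common.sh tracks its two possible apps in the associative arrays declared above, so 'target' and 'initiator' can run side by side on distinct RPC sockets. Trimmed to the path this run actually takes (the unquoted ${app_params[$app]} expansion is intentional, so the stored flags split into words):

    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
    declare -A app_pid

    app=target
    build/bin/spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --wait-for-rpc &
    app_pid[$app]=$!
    waitforlisten ${app_pid[$app]} ${app_socket[$app]}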
00:04:16.812 [2024-11-07 13:09:24.807033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585598 ] 00:04:17.382 [2024-11-07 13:09:25.272881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.383 [2024-11-07 13:09:25.370793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.643 13:09:25 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:17.643 13:09:25 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:17.643 13:09:25 json_config -- json_config/common.sh@26 -- # echo '' 00:04:17.643 00:04:17.643 13:09:25 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:17.643 13:09:25 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:17.643 13:09:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:17.643 13:09:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.643 13:09:25 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:17.643 13:09:25 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:17.643 13:09:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:17.643 13:09:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.643 13:09:25 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:17.643 13:09:25 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:17.643 13:09:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:19.027 13:09:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:19.027 13:09:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:19.027 13:09:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:19.027 13:09:26 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@54 -- # sort 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:19.027 13:09:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:19.027 13:09:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:19.027 13:09:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:19.027 13:09:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:19.027 13:09:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:19.027 13:09:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:19.287 MallocForNvmf0 00:04:19.287 13:09:27 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:19.287 13:09:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:19.287 MallocForNvmf1 00:04:19.287 13:09:27 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:19.287 13:09:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:19.547 [2024-11-07 13:09:27.436829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:19.547 13:09:27 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:19.547 13:09:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:19.806 13:09:27 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:19.806 13:09:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:20.067 13:09:27 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:20.067 13:09:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:20.067 13:09:27 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:20.067 13:09:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:20.327 [2024-11-07 13:09:28.143283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:20.327 13:09:28 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:20.327 13:09:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:20.327 13:09:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.327 13:09:28 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:20.327 13:09:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:20.327 13:09:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.327 13:09:28 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:20.327 13:09:28 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:20.327 13:09:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:20.587 MallocBdevForConfigChangeCheck 00:04:20.587 13:09:28 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:20.587 13:09:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:20.587 13:09:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.587 13:09:28 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:20.587 13:09:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.847 13:09:28 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:20.847 INFO: shutting down applications... 
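The state about to be torn down was assembled a few lines earlier through tgt_rpc, which simply aims scripts/rpc.py at the dedicated target socket. The whole NVMe-oF fixture, condensed from the calls in the trace:

    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420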
00:04:20.847 13:09:28 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:20.847 13:09:28 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:20.847 13:09:28 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:20.847 13:09:28 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:21.417 Calling clear_iscsi_subsystem 00:04:21.417 Calling clear_nvmf_subsystem 00:04:21.417 Calling clear_nbd_subsystem 00:04:21.417 Calling clear_ublk_subsystem 00:04:21.417 Calling clear_vhost_blk_subsystem 00:04:21.417 Calling clear_vhost_scsi_subsystem 00:04:21.417 Calling clear_bdev_subsystem 00:04:21.417 13:09:29 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:21.417 13:09:29 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:21.417 13:09:29 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:21.417 13:09:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.417 13:09:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:21.417 13:09:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:21.677 13:09:29 json_config -- json_config/json_config.sh@352 -- # break 00:04:21.677 13:09:29 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:21.677 13:09:29 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:21.677 13:09:29 json_config -- json_config/common.sh@31 -- # local app=target 00:04:21.677 13:09:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:21.677 13:09:29 json_config -- json_config/common.sh@35 -- # [[ -n 3585598 ]] 00:04:21.677 13:09:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3585598 00:04:21.677 13:09:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:21.677 13:09:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.677 13:09:29 json_config -- json_config/common.sh@41 -- # kill -0 3585598 00:04:21.677 13:09:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:22.246 13:09:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:22.246 13:09:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.246 13:09:30 json_config -- json_config/common.sh@41 -- # kill -0 3585598 00:04:22.246 13:09:30 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:22.817 13:09:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:22.817 13:09:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.817 13:09:30 json_config -- json_config/common.sh@41 -- # kill -0 3585598 00:04:22.817 13:09:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:22.817 13:09:30 json_config -- json_config/common.sh@43 -- # break 00:04:22.817 13:09:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:22.817 13:09:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:22.817 SPDK target shutdown done 00:04:22.817 13:09:30 
json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:22.817 INFO: relaunching applications... 00:04:22.817 13:09:30 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.817 13:09:30 json_config -- json_config/common.sh@9 -- # local app=target 00:04:22.817 13:09:30 json_config -- json_config/common.sh@10 -- # shift 00:04:22.817 13:09:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.817 13:09:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.817 13:09:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.817 13:09:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.817 13:09:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.817 13:09:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3586951 00:04:22.817 13:09:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:22.817 Waiting for target to run... 00:04:22.817 13:09:30 json_config -- json_config/common.sh@25 -- # waitforlisten 3586951 /var/tmp/spdk_tgt.sock 00:04:22.817 13:09:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.817 13:09:30 json_config -- common/autotest_common.sh@833 -- # '[' -z 3586951 ']' 00:04:22.817 13:09:30 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.817 13:09:30 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:22.817 13:09:30 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.817 13:09:30 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:22.817 13:09:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.817 [2024-11-07 13:09:30.629722] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:04:22.817 [2024-11-07 13:09:30.629847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3586951 ]
00:04:23.077 [2024-11-07 13:09:30.996437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:23.338 [2024-11-07 13:09:31.089522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:24.277 [2024-11-07 13:09:32.100951] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:24.277 [2024-11-07 13:09:32.133389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:04:24.277 13:09:32 json_config -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:04:24.277 13:09:32 json_config -- common/autotest_common.sh@866 -- # return 0
00:04:24.277 13:09:32 json_config -- json_config/common.sh@26 -- # echo ''
00:04:24.277
00:04:24.277 13:09:32 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:04:24.277 13:09:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
INFO: Checking if target configuration is the same...
00:04:24.277 13:09:32 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:24.277 13:09:32 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:04:24.277 13:09:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:24.277 + '[' 2 -ne 2 ']'
00:04:24.277 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:24.277 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:24.277 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:24.277 +++ basename /dev/fd/62
00:04:24.277 ++ mktemp /tmp/62.XXX
00:04:24.277 + tmp_file_1=/tmp/62.8lR
00:04:24.277 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:24.277 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:24.277 + tmp_file_2=/tmp/spdk_tgt_config.json.PbI
00:04:24.277 + ret=0
00:04:24.277 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:24.537 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:24.797 + diff -u /tmp/62.8lR /tmp/spdk_tgt_config.json.PbI
00:04:24.797 + echo 'INFO: JSON config files are the same'
INFO: JSON config files are the same
00:04:24.797 + rm /tmp/62.8lR /tmp/spdk_tgt_config.json.PbI
00:04:24.797 + exit 0
00:04:24.797 13:09:32 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:04:24.797 13:09:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
INFO: changing configuration and checking if this can be detected...
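
The "JSON config files are the same" verdict comes out of json_diff.sh: it snapshots the live configuration with save_config, normalizes both JSON documents with config_filter.py -method sort, and lets diff -u decide. A condensed sketch of that check using the same scripts (the /dev/fd plumbing of the real script is simplified here, and config_filter.py is assumed to filter stdin to stdout as the trace suggests):

    # Sketch of the json_diff.sh comparison traced above; paths relative to the SPDK tree.
    tmp_live=$(mktemp /tmp/62.XXX)
    tmp_ref=$(mktemp /tmp/spdk_tgt_config.json.XXX)

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > "$tmp_live"
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$tmp_ref"

    if diff -u "$tmp_live" "$tmp_ref"; then
        echo 'INFO: JSON config files are the same'
    fi
    rm "$tmp_live" "$tmp_ref"
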
00:04:24.797 13:09:32 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:24.797 13:09:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:24.797 13:09:32 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:24.797 13:09:32 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:04:24.797 13:09:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
+ '[' 2 -ne 2 ']'
00:04:24.797 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:24.797 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:24.797 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:24.797 +++ basename /dev/fd/62
00:04:24.797 ++ mktemp /tmp/62.XXX
00:04:24.797 + tmp_file_1=/tmp/62.Y2G
00:04:24.797 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:24.797 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:24.797 + tmp_file_2=/tmp/spdk_tgt_config.json.khN
00:04:24.797 + ret=0
00:04:24.797 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:25.058 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:25.320 + diff -u /tmp/62.Y2G /tmp/spdk_tgt_config.json.khN
00:04:25.320 + ret=1
00:04:25.320 + echo '=== Start of file: /tmp/62.Y2G ==='
00:04:25.320 + cat /tmp/62.Y2G
00:04:25.320 + echo '=== End of file: /tmp/62.Y2G ==='
00:04:25.320 + echo ''
00:04:25.320 + echo '=== Start of file: /tmp/spdk_tgt_config.json.khN ==='
00:04:25.320 + cat /tmp/spdk_tgt_config.json.khN
00:04:25.320 + echo '=== End of file: /tmp/spdk_tgt_config.json.khN ==='
00:04:25.320 + echo ''
00:04:25.320 + rm /tmp/62.Y2G /tmp/spdk_tgt_config.json.khN
00:04:25.320 + exit 1
00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
INFO: configuration change detected.
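
This second pass is the negative test: deleting the MallocBdevForConfigChangeCheck marker guarantees the live configuration now differs from the saved one, so the same diff must exit non-zero (the ret=1 above). Sketched under the same assumptions as the previous snippet:

    # Force a detectable change, then re-run the normalized diff;
    # diff failing is the expected "configuration change detected" path.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ref.json
    diff -u /tmp/live.json /tmp/ref.json || echo 'INFO: configuration change detected.'
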
00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@324 -- # [[ -n 3586951 ]] 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.320 13:09:33 json_config -- json_config/json_config.sh@330 -- # killprocess 3586951 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@952 -- # '[' -z 3586951 ']' 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@956 -- # kill -0 3586951 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@957 -- # uname 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3586951 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3586951' 00:04:25.320 killing process with pid 3586951 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@971 -- # kill 3586951 00:04:25.320 13:09:33 json_config -- common/autotest_common.sh@976 -- # wait 3586951 00:04:26.262 13:09:34 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.262 13:09:34 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:26.262 13:09:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.262 13:09:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.262 13:09:34 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:26.262 13:09:34 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:26.262 INFO: Success 00:04:26.262 00:04:26.262 real 0m9.564s 
00:04:26.262 user 0m10.670s 00:04:26.262 sys 0m2.406s 00:04:26.262 13:09:34 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.262 13:09:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.262 ************************************ 00:04:26.262 END TEST json_config 00:04:26.262 ************************************ 00:04:26.262 13:09:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:26.262 13:09:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.262 13:09:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.262 13:09:34 -- common/autotest_common.sh@10 -- # set +x 00:04:26.262 ************************************ 00:04:26.262 START TEST json_config_extra_key 00:04:26.262 ************************************ 00:04:26.262 13:09:34 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:26.262 13:09:34 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:26.262 13:09:34 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:26.262 13:09:34 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:26.262 13:09:34 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.262 13:09:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:26.525 13:09:34 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.525 13:09:34 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:26.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.525 --rc genhtml_branch_coverage=1 00:04:26.525 --rc genhtml_function_coverage=1 00:04:26.525 --rc genhtml_legend=1 00:04:26.525 --rc geninfo_all_blocks=1 00:04:26.525 --rc geninfo_unexecuted_blocks=1 00:04:26.525 00:04:26.525 ' 00:04:26.525 13:09:34 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:26.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.525 --rc genhtml_branch_coverage=1 00:04:26.525 --rc genhtml_function_coverage=1 00:04:26.525 --rc genhtml_legend=1 00:04:26.525 --rc geninfo_all_blocks=1 00:04:26.525 --rc geninfo_unexecuted_blocks=1 00:04:26.525 00:04:26.525 ' 00:04:26.525 13:09:34 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:26.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.525 --rc genhtml_branch_coverage=1 00:04:26.525 --rc genhtml_function_coverage=1 00:04:26.525 --rc genhtml_legend=1 00:04:26.525 --rc geninfo_all_blocks=1 00:04:26.525 --rc geninfo_unexecuted_blocks=1 00:04:26.525 00:04:26.525 ' 00:04:26.525 13:09:34 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:26.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.525 --rc genhtml_branch_coverage=1 00:04:26.525 --rc genhtml_function_coverage=1 00:04:26.525 --rc genhtml_legend=1 00:04:26.525 --rc geninfo_all_blocks=1 00:04:26.525 --rc geninfo_unexecuted_blocks=1 00:04:26.525 00:04:26.525 ' 00:04:26.525 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.525 
13:09:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.525 13:09:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.525 13:09:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.526 13:09:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.526 13:09:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.526 13:09:34 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.526 13:09:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:26.526 13:09:34 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.526 13:09:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:26.526 13:09:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:26.526 13:09:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:26.526 13:09:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.526 13:09:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.526 13:09:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.526 13:09:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:26.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:26.526 13:09:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:26.526 13:09:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:26.526 13:09:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:26.526 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:26.526 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:26.526 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:26.526 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:26.526 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:26.526 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:26.526 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:26.526 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:26.526 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:26.526 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:26.526 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:26.526 INFO: launching applications... 
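
The "[: : integer expression expected" complaint a few lines up is a classic bash trap: nvmf/common.sh line 33 ends up running [ '' -eq 1 ] because the variable it tests is empty, and -eq requires integer operands. It is harmless here (the test simply fails), but defaulting the variable keeps such a check well-defined; a tiny sketch with a stand-in variable name:

    flag=""                           # empty, like the variable in the trace above
    # [ "$flag" -eq 1 ] would print:  [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then   # empty string defaults to 0, test stays valid
        echo "feature on"
    else
        echo "feature off"            # taken here
    fi
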
00:04:26.526 13:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:26.526 13:09:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:26.526 13:09:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:26.526 13:09:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:26.526 13:09:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:26.526 13:09:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:26.526 13:09:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.526 13:09:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.526 13:09:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3587754 00:04:26.526 13:09:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:26.526 Waiting for target to run... 00:04:26.526 13:09:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3587754 /var/tmp/spdk_tgt.sock 00:04:26.526 13:09:34 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3587754 ']' 00:04:26.526 13:09:34 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:26.526 13:09:34 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:26.526 13:09:34 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:26.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:26.526 13:09:34 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:26.526 13:09:34 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:26.526 13:09:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:26.526 [2024-11-07 13:09:34.413973] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:04:26.526 [2024-11-07 13:09:34.414112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3587754 ] 00:04:27.097 [2024-11-07 13:09:34.805393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.097 [2024-11-07 13:09:34.897843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.668 13:09:35 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:27.668 13:09:35 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:27.668 13:09:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:27.668 00:04:27.668 13:09:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:27.668 INFO: shutting down applications... 
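
json_config_test_shutdown_app then stops the target the polite way: SIGINT first, followed by polling the PID with kill -0 for up to 30 half-second intervals, which is exactly the loop traced below in json_config/common.sh@40-45. As a standalone sketch (the PID value is the one from this run):

    pid=3587754                 # app_pid["target"] for this run
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only probes; no signal is delivered
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null || echo 'SPDK target shutdown done'
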
00:04:27.668 13:09:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:27.668 13:09:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:27.668 13:09:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:27.668 13:09:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3587754 ]] 00:04:27.668 13:09:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3587754 00:04:27.668 13:09:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:27.668 13:09:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.668 13:09:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3587754 00:04:27.668 13:09:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:28.240 13:09:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:28.240 13:09:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.240 13:09:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3587754 00:04:28.240 13:09:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:28.501 13:09:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:28.501 13:09:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.501 13:09:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3587754 00:04:28.501 13:09:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:29.074 13:09:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:29.074 13:09:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.074 13:09:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3587754 00:04:29.074 13:09:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:29.646 13:09:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:29.646 13:09:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.646 13:09:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3587754 00:04:29.646 13:09:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:29.646 13:09:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:29.646 13:09:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:29.646 13:09:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:29.646 SPDK target shutdown done 00:04:29.646 13:09:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:29.646 Success 00:04:29.646 00:04:29.646 real 0m3.347s 00:04:29.646 user 0m2.914s 00:04:29.646 sys 0m0.598s 00:04:29.646 13:09:37 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.646 13:09:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:29.646 ************************************ 00:04:29.646 END TEST json_config_extra_key 00:04:29.646 ************************************ 00:04:29.646 13:09:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:29.646 13:09:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.646 13:09:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.646 13:09:37 -- common/autotest_common.sh@10 -- # set +x 
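
Every run_test re-enters the lcov gate from scripts/common.sh seen above (and again below): "lt 1.15 2" splits both version strings on '.', '-' and ':' and compares them numerically field by field to choose old- or new-style lcov flags. A minimal standalone rendering of that comparison (the function name is ours; missing fields are padded with 0):

    # Returns success when $1 is strictly older than $2, in the spirit of 'lt 1.15 2' above.
    ver_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }

    ver_lt 1.15 2 && echo old-style || echo new-style   # prints old-style, matching the trace
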
00:04:29.646 ************************************ 00:04:29.646 START TEST alias_rpc 00:04:29.646 ************************************ 00:04:29.646 13:09:37 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:29.646 * Looking for test storage... 00:04:29.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:29.646 13:09:37 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:29.646 13:09:37 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:29.646 13:09:37 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:29.907 13:09:37 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.907 13:09:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:29.907 13:09:37 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.907 13:09:37 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:29.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.907 --rc genhtml_branch_coverage=1 00:04:29.907 --rc genhtml_function_coverage=1 00:04:29.907 --rc genhtml_legend=1 00:04:29.907 --rc geninfo_all_blocks=1 00:04:29.907 --rc geninfo_unexecuted_blocks=1 00:04:29.907 00:04:29.907 ' 00:04:29.907 13:09:37 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:29.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.907 --rc genhtml_branch_coverage=1 00:04:29.907 --rc genhtml_function_coverage=1 00:04:29.907 --rc genhtml_legend=1 00:04:29.907 --rc geninfo_all_blocks=1 00:04:29.907 --rc geninfo_unexecuted_blocks=1 00:04:29.907 00:04:29.907 ' 00:04:29.907 13:09:37 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:29.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.907 --rc genhtml_branch_coverage=1 00:04:29.907 --rc genhtml_function_coverage=1 00:04:29.907 --rc genhtml_legend=1 00:04:29.907 --rc geninfo_all_blocks=1 00:04:29.907 --rc geninfo_unexecuted_blocks=1 00:04:29.907 00:04:29.907 ' 00:04:29.907 13:09:37 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:29.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.907 --rc genhtml_branch_coverage=1 00:04:29.907 --rc genhtml_function_coverage=1 00:04:29.907 --rc genhtml_legend=1 00:04:29.907 --rc geninfo_all_blocks=1 00:04:29.907 --rc geninfo_unexecuted_blocks=1 00:04:29.907 00:04:29.907 ' 00:04:29.907 13:09:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:29.907 13:09:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3588487 00:04:29.907 13:09:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3588487 00:04:29.907 13:09:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.907 13:09:37 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 3588487 ']' 00:04:29.907 13:09:37 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.907 13:09:37 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:29.907 13:09:37 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:04:29.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.907 13:09:37 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:29.907 13:09:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.907 [2024-11-07 13:09:37.833775] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:04:29.907 [2024-11-07 13:09:37.833920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588487 ] 00:04:30.176 [2024-11-07 13:09:37.988929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.176 [2024-11-07 13:09:38.088100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.748 13:09:38 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.748 13:09:38 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:30.748 13:09:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:31.008 13:09:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3588487 00:04:31.008 13:09:38 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3588487 ']' 00:04:31.008 13:09:38 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3588487 00:04:31.008 13:09:38 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:31.008 13:09:38 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:31.008 13:09:38 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3588487 00:04:31.008 13:09:38 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:31.008 13:09:38 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:31.008 13:09:38 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3588487' 00:04:31.008 killing process with pid 3588487 00:04:31.008 13:09:38 alias_rpc -- common/autotest_common.sh@971 -- # kill 3588487 00:04:31.008 13:09:38 alias_rpc -- common/autotest_common.sh@976 -- # wait 3588487 00:04:32.919 00:04:32.919 real 0m3.052s 00:04:32.919 user 0m3.041s 00:04:32.919 sys 0m0.579s 00:04:32.919 13:09:40 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:32.919 13:09:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.919 ************************************ 00:04:32.919 END TEST alias_rpc 00:04:32.919 ************************************ 00:04:32.919 13:09:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:32.920 13:09:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:32.920 13:09:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:32.920 13:09:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:32.920 13:09:40 -- common/autotest_common.sh@10 -- # set +x 00:04:32.920 ************************************ 00:04:32.920 START TEST spdkcli_tcp 00:04:32.920 ************************************ 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:32.920 * Looking for test storage... 
00:04:32.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.920 13:09:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:32.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.920 --rc genhtml_branch_coverage=1 00:04:32.920 --rc genhtml_function_coverage=1 00:04:32.920 --rc genhtml_legend=1 00:04:32.920 --rc geninfo_all_blocks=1 00:04:32.920 --rc geninfo_unexecuted_blocks=1 00:04:32.920 00:04:32.920 ' 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:32.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.920 --rc genhtml_branch_coverage=1 00:04:32.920 --rc genhtml_function_coverage=1 00:04:32.920 --rc genhtml_legend=1 00:04:32.920 --rc geninfo_all_blocks=1 00:04:32.920 --rc 
geninfo_unexecuted_blocks=1 00:04:32.920 00:04:32.920 ' 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:32.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.920 --rc genhtml_branch_coverage=1 00:04:32.920 --rc genhtml_function_coverage=1 00:04:32.920 --rc genhtml_legend=1 00:04:32.920 --rc geninfo_all_blocks=1 00:04:32.920 --rc geninfo_unexecuted_blocks=1 00:04:32.920 00:04:32.920 ' 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:32.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.920 --rc genhtml_branch_coverage=1 00:04:32.920 --rc genhtml_function_coverage=1 00:04:32.920 --rc genhtml_legend=1 00:04:32.920 --rc geninfo_all_blocks=1 00:04:32.920 --rc geninfo_unexecuted_blocks=1 00:04:32.920 00:04:32.920 ' 00:04:32.920 13:09:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:32.920 13:09:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:32.920 13:09:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:32.920 13:09:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:32.920 13:09:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:32.920 13:09:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:32.920 13:09:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.920 13:09:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3589222 00:04:32.920 13:09:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3589222 00:04:32.920 13:09:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3589222 ']' 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:32.920 13:09:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.181 [2024-11-07 13:09:40.980214] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
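
For the TCP leg, the spdkcli test below does not talk to /var/tmp/spdk.sock directly: it parks a socat bridge on 127.0.0.1:9998 and points rpc.py there, retrying while the bridge and target come up; rpc_get_methods then returns the long method list that follows. The pattern, sketched:

    # Bridge TCP port 9998 to the target's UNIX RPC socket, as spdkcli/tcp.sh@30-33 does below.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r/-t as in the trace: retry the connection up to 100 times with a 2s timeout.
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" 2>/dev/null || true   # the single-shot bridge may have exited already
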
00:04:33.181 [2024-11-07 13:09:40.980328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3589222 ] 00:04:33.181 [2024-11-07 13:09:41.121966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.441 [2024-11-07 13:09:41.219577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.441 [2024-11-07 13:09:41.219598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.013 13:09:41 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:34.013 13:09:41 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:34.013 13:09:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3589472 00:04:34.013 13:09:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:34.013 13:09:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:34.013 [ 00:04:34.013 "bdev_malloc_delete", 00:04:34.013 "bdev_malloc_create", 00:04:34.013 "bdev_null_resize", 00:04:34.013 "bdev_null_delete", 00:04:34.013 "bdev_null_create", 00:04:34.013 "bdev_nvme_cuse_unregister", 00:04:34.013 "bdev_nvme_cuse_register", 00:04:34.013 "bdev_opal_new_user", 00:04:34.013 "bdev_opal_set_lock_state", 00:04:34.013 "bdev_opal_delete", 00:04:34.013 "bdev_opal_get_info", 00:04:34.013 "bdev_opal_create", 00:04:34.013 "bdev_nvme_opal_revert", 00:04:34.013 "bdev_nvme_opal_init", 00:04:34.013 "bdev_nvme_send_cmd", 00:04:34.013 "bdev_nvme_set_keys", 00:04:34.013 "bdev_nvme_get_path_iostat", 00:04:34.013 "bdev_nvme_get_mdns_discovery_info", 00:04:34.013 "bdev_nvme_stop_mdns_discovery", 00:04:34.013 "bdev_nvme_start_mdns_discovery", 00:04:34.013 "bdev_nvme_set_multipath_policy", 00:04:34.013 "bdev_nvme_set_preferred_path", 00:04:34.013 "bdev_nvme_get_io_paths", 00:04:34.013 "bdev_nvme_remove_error_injection", 00:04:34.013 "bdev_nvme_add_error_injection", 00:04:34.013 "bdev_nvme_get_discovery_info", 00:04:34.013 "bdev_nvme_stop_discovery", 00:04:34.013 "bdev_nvme_start_discovery", 00:04:34.013 "bdev_nvme_get_controller_health_info", 00:04:34.013 "bdev_nvme_disable_controller", 00:04:34.013 "bdev_nvme_enable_controller", 00:04:34.013 "bdev_nvme_reset_controller", 00:04:34.013 "bdev_nvme_get_transport_statistics", 00:04:34.013 "bdev_nvme_apply_firmware", 00:04:34.013 "bdev_nvme_detach_controller", 00:04:34.013 "bdev_nvme_get_controllers", 00:04:34.013 "bdev_nvme_attach_controller", 00:04:34.013 "bdev_nvme_set_hotplug", 00:04:34.013 "bdev_nvme_set_options", 00:04:34.013 "bdev_passthru_delete", 00:04:34.013 "bdev_passthru_create", 00:04:34.013 "bdev_lvol_set_parent_bdev", 00:04:34.013 "bdev_lvol_set_parent", 00:04:34.013 "bdev_lvol_check_shallow_copy", 00:04:34.013 "bdev_lvol_start_shallow_copy", 00:04:34.013 "bdev_lvol_grow_lvstore", 00:04:34.013 "bdev_lvol_get_lvols", 00:04:34.013 "bdev_lvol_get_lvstores", 00:04:34.013 "bdev_lvol_delete", 00:04:34.013 "bdev_lvol_set_read_only", 00:04:34.013 "bdev_lvol_resize", 00:04:34.013 "bdev_lvol_decouple_parent", 00:04:34.013 "bdev_lvol_inflate", 00:04:34.013 "bdev_lvol_rename", 00:04:34.013 "bdev_lvol_clone_bdev", 00:04:34.013 "bdev_lvol_clone", 00:04:34.013 "bdev_lvol_snapshot", 00:04:34.013 "bdev_lvol_create", 00:04:34.013 "bdev_lvol_delete_lvstore", 00:04:34.013 "bdev_lvol_rename_lvstore", 
00:04:34.013 "bdev_lvol_create_lvstore", 00:04:34.013 "bdev_raid_set_options", 00:04:34.013 "bdev_raid_remove_base_bdev", 00:04:34.013 "bdev_raid_add_base_bdev", 00:04:34.013 "bdev_raid_delete", 00:04:34.013 "bdev_raid_create", 00:04:34.013 "bdev_raid_get_bdevs", 00:04:34.013 "bdev_error_inject_error", 00:04:34.013 "bdev_error_delete", 00:04:34.013 "bdev_error_create", 00:04:34.013 "bdev_split_delete", 00:04:34.013 "bdev_split_create", 00:04:34.013 "bdev_delay_delete", 00:04:34.013 "bdev_delay_create", 00:04:34.013 "bdev_delay_update_latency", 00:04:34.013 "bdev_zone_block_delete", 00:04:34.013 "bdev_zone_block_create", 00:04:34.013 "blobfs_create", 00:04:34.013 "blobfs_detect", 00:04:34.013 "blobfs_set_cache_size", 00:04:34.013 "bdev_aio_delete", 00:04:34.013 "bdev_aio_rescan", 00:04:34.013 "bdev_aio_create", 00:04:34.013 "bdev_ftl_set_property", 00:04:34.013 "bdev_ftl_get_properties", 00:04:34.013 "bdev_ftl_get_stats", 00:04:34.013 "bdev_ftl_unmap", 00:04:34.013 "bdev_ftl_unload", 00:04:34.013 "bdev_ftl_delete", 00:04:34.013 "bdev_ftl_load", 00:04:34.013 "bdev_ftl_create", 00:04:34.013 "bdev_virtio_attach_controller", 00:04:34.013 "bdev_virtio_scsi_get_devices", 00:04:34.013 "bdev_virtio_detach_controller", 00:04:34.013 "bdev_virtio_blk_set_hotplug", 00:04:34.013 "bdev_iscsi_delete", 00:04:34.013 "bdev_iscsi_create", 00:04:34.013 "bdev_iscsi_set_options", 00:04:34.013 "accel_error_inject_error", 00:04:34.013 "ioat_scan_accel_module", 00:04:34.013 "dsa_scan_accel_module", 00:04:34.013 "iaa_scan_accel_module", 00:04:34.013 "keyring_file_remove_key", 00:04:34.013 "keyring_file_add_key", 00:04:34.013 "keyring_linux_set_options", 00:04:34.013 "fsdev_aio_delete", 00:04:34.013 "fsdev_aio_create", 00:04:34.013 "iscsi_get_histogram", 00:04:34.013 "iscsi_enable_histogram", 00:04:34.013 "iscsi_set_options", 00:04:34.013 "iscsi_get_auth_groups", 00:04:34.013 "iscsi_auth_group_remove_secret", 00:04:34.013 "iscsi_auth_group_add_secret", 00:04:34.013 "iscsi_delete_auth_group", 00:04:34.013 "iscsi_create_auth_group", 00:04:34.013 "iscsi_set_discovery_auth", 00:04:34.013 "iscsi_get_options", 00:04:34.013 "iscsi_target_node_request_logout", 00:04:34.013 "iscsi_target_node_set_redirect", 00:04:34.013 "iscsi_target_node_set_auth", 00:04:34.013 "iscsi_target_node_add_lun", 00:04:34.014 "iscsi_get_stats", 00:04:34.014 "iscsi_get_connections", 00:04:34.014 "iscsi_portal_group_set_auth", 00:04:34.014 "iscsi_start_portal_group", 00:04:34.014 "iscsi_delete_portal_group", 00:04:34.014 "iscsi_create_portal_group", 00:04:34.014 "iscsi_get_portal_groups", 00:04:34.014 "iscsi_delete_target_node", 00:04:34.014 "iscsi_target_node_remove_pg_ig_maps", 00:04:34.014 "iscsi_target_node_add_pg_ig_maps", 00:04:34.014 "iscsi_create_target_node", 00:04:34.014 "iscsi_get_target_nodes", 00:04:34.014 "iscsi_delete_initiator_group", 00:04:34.014 "iscsi_initiator_group_remove_initiators", 00:04:34.014 "iscsi_initiator_group_add_initiators", 00:04:34.014 "iscsi_create_initiator_group", 00:04:34.014 "iscsi_get_initiator_groups", 00:04:34.014 "nvmf_set_crdt", 00:04:34.014 "nvmf_set_config", 00:04:34.014 "nvmf_set_max_subsystems", 00:04:34.014 "nvmf_stop_mdns_prr", 00:04:34.014 "nvmf_publish_mdns_prr", 00:04:34.014 "nvmf_subsystem_get_listeners", 00:04:34.014 "nvmf_subsystem_get_qpairs", 00:04:34.014 "nvmf_subsystem_get_controllers", 00:04:34.014 "nvmf_get_stats", 00:04:34.014 "nvmf_get_transports", 00:04:34.014 "nvmf_create_transport", 00:04:34.014 "nvmf_get_targets", 00:04:34.014 "nvmf_delete_target", 00:04:34.014 "nvmf_create_target", 
00:04:34.014 "nvmf_subsystem_allow_any_host", 00:04:34.014 "nvmf_subsystem_set_keys", 00:04:34.014 "nvmf_subsystem_remove_host", 00:04:34.014 "nvmf_subsystem_add_host", 00:04:34.014 "nvmf_ns_remove_host", 00:04:34.014 "nvmf_ns_add_host", 00:04:34.014 "nvmf_subsystem_remove_ns", 00:04:34.014 "nvmf_subsystem_set_ns_ana_group", 00:04:34.014 "nvmf_subsystem_add_ns", 00:04:34.014 "nvmf_subsystem_listener_set_ana_state", 00:04:34.014 "nvmf_discovery_get_referrals", 00:04:34.014 "nvmf_discovery_remove_referral", 00:04:34.014 "nvmf_discovery_add_referral", 00:04:34.014 "nvmf_subsystem_remove_listener", 00:04:34.014 "nvmf_subsystem_add_listener", 00:04:34.014 "nvmf_delete_subsystem", 00:04:34.014 "nvmf_create_subsystem", 00:04:34.014 "nvmf_get_subsystems", 00:04:34.014 "env_dpdk_get_mem_stats", 00:04:34.014 "nbd_get_disks", 00:04:34.014 "nbd_stop_disk", 00:04:34.014 "nbd_start_disk", 00:04:34.014 "ublk_recover_disk", 00:04:34.014 "ublk_get_disks", 00:04:34.014 "ublk_stop_disk", 00:04:34.014 "ublk_start_disk", 00:04:34.014 "ublk_destroy_target", 00:04:34.014 "ublk_create_target", 00:04:34.014 "virtio_blk_create_transport", 00:04:34.014 "virtio_blk_get_transports", 00:04:34.014 "vhost_controller_set_coalescing", 00:04:34.014 "vhost_get_controllers", 00:04:34.014 "vhost_delete_controller", 00:04:34.014 "vhost_create_blk_controller", 00:04:34.014 "vhost_scsi_controller_remove_target", 00:04:34.014 "vhost_scsi_controller_add_target", 00:04:34.014 "vhost_start_scsi_controller", 00:04:34.014 "vhost_create_scsi_controller", 00:04:34.014 "thread_set_cpumask", 00:04:34.014 "scheduler_set_options", 00:04:34.014 "framework_get_governor", 00:04:34.014 "framework_get_scheduler", 00:04:34.014 "framework_set_scheduler", 00:04:34.014 "framework_get_reactors", 00:04:34.014 "thread_get_io_channels", 00:04:34.014 "thread_get_pollers", 00:04:34.014 "thread_get_stats", 00:04:34.014 "framework_monitor_context_switch", 00:04:34.014 "spdk_kill_instance", 00:04:34.014 "log_enable_timestamps", 00:04:34.014 "log_get_flags", 00:04:34.014 "log_clear_flag", 00:04:34.014 "log_set_flag", 00:04:34.014 "log_get_level", 00:04:34.014 "log_set_level", 00:04:34.014 "log_get_print_level", 00:04:34.014 "log_set_print_level", 00:04:34.014 "framework_enable_cpumask_locks", 00:04:34.014 "framework_disable_cpumask_locks", 00:04:34.014 "framework_wait_init", 00:04:34.014 "framework_start_init", 00:04:34.014 "scsi_get_devices", 00:04:34.014 "bdev_get_histogram", 00:04:34.014 "bdev_enable_histogram", 00:04:34.014 "bdev_set_qos_limit", 00:04:34.014 "bdev_set_qd_sampling_period", 00:04:34.014 "bdev_get_bdevs", 00:04:34.014 "bdev_reset_iostat", 00:04:34.014 "bdev_get_iostat", 00:04:34.014 "bdev_examine", 00:04:34.014 "bdev_wait_for_examine", 00:04:34.014 "bdev_set_options", 00:04:34.014 "accel_get_stats", 00:04:34.014 "accel_set_options", 00:04:34.014 "accel_set_driver", 00:04:34.014 "accel_crypto_key_destroy", 00:04:34.014 "accel_crypto_keys_get", 00:04:34.014 "accel_crypto_key_create", 00:04:34.014 "accel_assign_opc", 00:04:34.014 "accel_get_module_info", 00:04:34.014 "accel_get_opc_assignments", 00:04:34.014 "vmd_rescan", 00:04:34.014 "vmd_remove_device", 00:04:34.014 "vmd_enable", 00:04:34.014 "sock_get_default_impl", 00:04:34.014 "sock_set_default_impl", 00:04:34.014 "sock_impl_set_options", 00:04:34.014 "sock_impl_get_options", 00:04:34.014 "iobuf_get_stats", 00:04:34.014 "iobuf_set_options", 00:04:34.014 "keyring_get_keys", 00:04:34.014 "framework_get_pci_devices", 00:04:34.014 "framework_get_config", 00:04:34.014 "framework_get_subsystems", 
00:04:34.014 "fsdev_set_opts", 00:04:34.014 "fsdev_get_opts", 00:04:34.014 "trace_get_info", 00:04:34.014 "trace_get_tpoint_group_mask", 00:04:34.014 "trace_disable_tpoint_group", 00:04:34.014 "trace_enable_tpoint_group", 00:04:34.014 "trace_clear_tpoint_mask", 00:04:34.014 "trace_set_tpoint_mask", 00:04:34.014 "notify_get_notifications", 00:04:34.014 "notify_get_types", 00:04:34.014 "spdk_get_version", 00:04:34.014 "rpc_get_methods" 00:04:34.014 ] 00:04:34.274 13:09:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:34.274 13:09:42 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:34.274 13:09:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:34.274 13:09:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:34.274 13:09:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3589222 00:04:34.274 13:09:42 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3589222 ']' 00:04:34.274 13:09:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3589222 00:04:34.274 13:09:42 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:34.274 13:09:42 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:34.274 13:09:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3589222 00:04:34.274 13:09:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:34.274 13:09:42 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:34.275 13:09:42 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3589222' 00:04:34.275 killing process with pid 3589222 00:04:34.275 13:09:42 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3589222 00:04:34.275 13:09:42 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3589222 00:04:36.186 00:04:36.186 real 0m3.063s 00:04:36.186 user 0m5.370s 00:04:36.186 sys 0m0.600s 00:04:36.186 13:09:43 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:36.186 13:09:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.186 ************************************ 00:04:36.186 END TEST spdkcli_tcp 00:04:36.186 ************************************ 00:04:36.186 13:09:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:36.186 13:09:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.186 13:09:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.186 13:09:43 -- common/autotest_common.sh@10 -- # set +x 00:04:36.186 ************************************ 00:04:36.186 START TEST dpdk_mem_utility 00:04:36.186 ************************************ 00:04:36.187 13:09:43 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:36.187 * Looking for test storage... 
00:04:36.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:36.187 13:09:43 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:36.187 13:09:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:36.187 13:09:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:36.187 13:09:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:36.187 13:09:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:36.187 13:09:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.187 13:09:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:36.187 13:09:44 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.187 13:09:44 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.187 13:09:44 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.187 13:09:44 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:36.187 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.187 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:36.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.187 --rc genhtml_branch_coverage=1 00:04:36.187 --rc genhtml_function_coverage=1 00:04:36.187 --rc genhtml_legend=1 00:04:36.187 --rc geninfo_all_blocks=1 00:04:36.187 --rc geninfo_unexecuted_blocks=1 00:04:36.187 00:04:36.187 ' 00:04:36.187 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:36.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.187 --rc 
genhtml_branch_coverage=1 00:04:36.187 --rc genhtml_function_coverage=1 00:04:36.187 --rc genhtml_legend=1 00:04:36.187 --rc geninfo_all_blocks=1 00:04:36.187 --rc geninfo_unexecuted_blocks=1 00:04:36.187 00:04:36.187 ' 00:04:36.187 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:36.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.187 --rc genhtml_branch_coverage=1 00:04:36.187 --rc genhtml_function_coverage=1 00:04:36.187 --rc genhtml_legend=1 00:04:36.187 --rc geninfo_all_blocks=1 00:04:36.187 --rc geninfo_unexecuted_blocks=1 00:04:36.187 00:04:36.187 ' 00:04:36.187 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:36.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.187 --rc genhtml_branch_coverage=1 00:04:36.187 --rc genhtml_function_coverage=1 00:04:36.187 --rc genhtml_legend=1 00:04:36.187 --rc geninfo_all_blocks=1 00:04:36.187 --rc geninfo_unexecuted_blocks=1 00:04:36.187 00:04:36.187 ' 00:04:36.187 13:09:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:36.187 13:09:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3589968 00:04:36.187 13:09:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3589968 00:04:36.187 13:09:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.187 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3589968 ']' 00:04:36.187 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.187 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.187 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.187 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.187 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.187 [2024-11-07 13:09:44.112886] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
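For context on the startup traced here: test_dpdk_mem_info.sh launches the target with build/bin/spdk_tgt and then blocks in waitforlisten until the RPC socket answers, which is when the EAL parameter line below appears. A minimal sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket seen in this log (the real waitforlisten in autotest_common.sh adds a retry cap and error handling not shown here):

    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    RPC_SOCK=/var/tmp/spdk.sock
    $SPDK_TGT &                       # start the target in the background
    spdkpid=$!
    # poll until the UNIX domain socket accepts a trivial RPC
    until scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done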
00:04:36.187 [2024-11-07 13:09:44.113030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3589968 ] 00:04:36.448 [2024-11-07 13:09:44.265166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.448 [2024-11-07 13:09:44.362712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.018 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:37.019 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:37.019 13:09:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:37.019 13:09:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:37.019 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.019 13:09:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.019 { 00:04:37.019 "filename": "/tmp/spdk_mem_dump.txt" 00:04:37.019 } 00:04:37.019 13:09:45 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.019 13:09:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:37.280 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:37.280 1 heaps totaling size 816.000000 MiB 00:04:37.280 size: 816.000000 MiB heap id: 0 00:04:37.280 end heaps---------- 00:04:37.280 9 mempools totaling size 595.772034 MiB 00:04:37.280 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:37.280 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:37.280 size: 92.545471 MiB name: bdev_io_3589968 00:04:37.280 size: 50.003479 MiB name: msgpool_3589968 00:04:37.280 size: 36.509338 MiB name: fsdev_io_3589968 00:04:37.280 size: 21.763794 MiB name: PDU_Pool 00:04:37.280 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:37.280 size: 4.133484 MiB name: evtpool_3589968 00:04:37.280 size: 0.026123 MiB name: Session_Pool 00:04:37.280 end mempools------- 00:04:37.280 6 memzones totaling size 4.142822 MiB 00:04:37.280 size: 1.000366 MiB name: RG_ring_0_3589968 00:04:37.280 size: 1.000366 MiB name: RG_ring_1_3589968 00:04:37.280 size: 1.000366 MiB name: RG_ring_4_3589968 00:04:37.280 size: 1.000366 MiB name: RG_ring_5_3589968 00:04:37.280 size: 0.125366 MiB name: RG_ring_2_3589968 00:04:37.280 size: 0.015991 MiB name: RG_ring_3_3589968 00:04:37.280 end memzones------- 00:04:37.280 13:09:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:37.280 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19 00:04:37.280 list of free elements. 
size: 16.857605 MiB 00:04:37.280 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:37.280 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:37.280 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:37.280 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:37.280 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:37.280 element at address: 0x200019200000 with size: 0.999329 MiB 00:04:37.280 element at address: 0x200000400000 with size: 0.998108 MiB 00:04:37.280 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:37.280 element at address: 0x200018a00000 with size: 0.959900 MiB 00:04:37.280 element at address: 0x200019500040 with size: 0.937256 MiB 00:04:37.280 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:37.280 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:04:37.280 element at address: 0x200000c00000 with size: 0.495300 MiB 00:04:37.280 element at address: 0x200018e00000 with size: 0.491150 MiB 00:04:37.280 element at address: 0x200019600000 with size: 0.485657 MiB 00:04:37.280 element at address: 0x200012c00000 with size: 0.446167 MiB 00:04:37.280 element at address: 0x200028000000 with size: 0.411072 MiB 00:04:37.280 element at address: 0x200000800000 with size: 0.355286 MiB 00:04:37.280 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:04:37.280 list of standard malloc elements. size: 199.221497 MiB 00:04:37.281 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:37.281 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:37.281 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:37.281 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:37.281 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:37.281 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:37.281 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:37.281 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:37.281 element at address: 0x200012bff040 with size: 0.000427 MiB 00:04:37.281 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:04:37.281 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:37.281 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:37.281 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:37.281 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:37.281 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:04:37.281 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:37.281 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:37.281 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:37.281 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:37.281 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:37.281 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:37.281 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:04:37.281 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:04:37.281 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:04:37.281 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:04:37.281 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:04:37.281 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:04:37.281 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:37.281 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:37.281 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:37.281 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200012bff200 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200012bff300 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200012bff400 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200012bff500 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200012bff600 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200012bff700 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200012bff800 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200012bff900 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:37.281 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:37.281 list of memzone associated elements. size: 599.920898 MiB 00:04:37.281 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:37.281 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:37.281 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:37.281 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:37.281 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:37.281 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3589968_0 00:04:37.281 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:37.281 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3589968_0 00:04:37.281 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:37.281 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3589968_0 00:04:37.281 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:37.281 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:37.281 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:37.281 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:37.281 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:37.281 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3589968_0 00:04:37.281 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:37.281 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3589968 00:04:37.281 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:37.281 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3589968 00:04:37.281 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:37.281 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:37.281 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:37.281 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:37.281 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:37.281 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:37.281 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:37.281 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:37.281 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:37.281 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3589968 00:04:37.281 element at address: 0x2000008ffb80 with 
size: 1.000549 MiB 00:04:37.281 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3589968 00:04:37.281 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:37.281 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3589968 00:04:37.281 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:37.281 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3589968 00:04:37.281 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:37.281 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3589968 00:04:37.281 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:37.281 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3589968 00:04:37.281 element at address: 0x200018e7dbc0 with size: 0.500549 MiB 00:04:37.281 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:37.281 element at address: 0x200012c72380 with size: 0.500549 MiB 00:04:37.281 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:37.281 element at address: 0x20001967c540 with size: 0.250549 MiB 00:04:37.281 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:37.281 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:37.281 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3589968 00:04:37.281 element at address: 0x20000085f180 with size: 0.125549 MiB 00:04:37.281 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3589968 00:04:37.281 element at address: 0x200018af5bc0 with size: 0.031799 MiB 00:04:37.281 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:37.281 element at address: 0x2000280693c0 with size: 0.023804 MiB 00:04:37.281 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:37.281 element at address: 0x20000085af40 with size: 0.016174 MiB 00:04:37.281 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3589968 00:04:37.281 element at address: 0x20002806f540 with size: 0.002502 MiB 00:04:37.281 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:37.281 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:04:37.281 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3589968 00:04:37.281 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:37.281 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3589968 00:04:37.281 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:37.281 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3589968 00:04:37.281 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:04:37.281 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:37.281 13:09:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:37.281 13:09:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3589968 00:04:37.281 13:09:45 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3589968 ']' 00:04:37.281 13:09:45 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3589968 00:04:37.281 13:09:45 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:37.281 13:09:45 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:37.281 13:09:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3589968 00:04:37.281 13:09:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 
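The memzone dump above is produced in two steps that the trace records: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then renders that file. A condensed sketch of the same flow, run from the SPDK repository root against the target started earlier:

    scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
    scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as above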
00:04:37.281 13:09:45 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:37.281 13:09:45 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3589968' 00:04:37.281 killing process with pid 3589968 00:04:37.281 13:09:45 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3589968 00:04:37.281 13:09:45 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3589968 00:04:39.195 00:04:39.195 real 0m2.956s 00:04:39.195 user 0m2.936s 00:04:39.195 sys 0m0.523s 00:04:39.195 13:09:46 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.195 13:09:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.195 ************************************ 00:04:39.195 END TEST dpdk_mem_utility 00:04:39.195 ************************************ 00:04:39.195 13:09:46 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:39.195 13:09:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:39.195 13:09:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.195 13:09:46 -- common/autotest_common.sh@10 -- # set +x 00:04:39.195 ************************************ 00:04:39.195 START TEST event 00:04:39.195 ************************************ 00:04:39.195 13:09:46 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:39.195 * Looking for test storage... 00:04:39.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:39.195 13:09:46 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.195 13:09:46 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.195 13:09:46 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.195 13:09:47 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.195 13:09:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.195 13:09:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.195 13:09:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.195 13:09:47 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.195 13:09:47 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.195 13:09:47 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.195 13:09:47 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.195 13:09:47 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.195 13:09:47 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.195 13:09:47 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.195 13:09:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.195 13:09:47 event -- scripts/common.sh@344 -- # case "$op" in 00:04:39.195 13:09:47 event -- scripts/common.sh@345 -- # : 1 00:04:39.195 13:09:47 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.195 13:09:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.195 13:09:47 event -- scripts/common.sh@365 -- # decimal 1 00:04:39.195 13:09:47 event -- scripts/common.sh@353 -- # local d=1 00:04:39.195 13:09:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.195 13:09:47 event -- scripts/common.sh@355 -- # echo 1 00:04:39.195 13:09:47 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.195 13:09:47 event -- scripts/common.sh@366 -- # decimal 2 00:04:39.195 13:09:47 event -- scripts/common.sh@353 -- # local d=2 00:04:39.195 13:09:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.195 13:09:47 event -- scripts/common.sh@355 -- # echo 2 00:04:39.195 13:09:47 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.195 13:09:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.195 13:09:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.195 13:09:47 event -- scripts/common.sh@368 -- # return 0 00:04:39.195 13:09:47 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.195 13:09:47 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.195 --rc genhtml_branch_coverage=1 00:04:39.195 --rc genhtml_function_coverage=1 00:04:39.195 --rc genhtml_legend=1 00:04:39.195 --rc geninfo_all_blocks=1 00:04:39.195 --rc geninfo_unexecuted_blocks=1 00:04:39.195 00:04:39.195 ' 00:04:39.195 13:09:47 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.195 --rc genhtml_branch_coverage=1 00:04:39.195 --rc genhtml_function_coverage=1 00:04:39.195 --rc genhtml_legend=1 00:04:39.195 --rc geninfo_all_blocks=1 00:04:39.195 --rc geninfo_unexecuted_blocks=1 00:04:39.195 00:04:39.195 ' 00:04:39.195 13:09:47 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.195 --rc genhtml_branch_coverage=1 00:04:39.195 --rc genhtml_function_coverage=1 00:04:39.195 --rc genhtml_legend=1 00:04:39.195 --rc geninfo_all_blocks=1 00:04:39.195 --rc geninfo_unexecuted_blocks=1 00:04:39.195 00:04:39.195 ' 00:04:39.195 13:09:47 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.195 --rc genhtml_branch_coverage=1 00:04:39.195 --rc genhtml_function_coverage=1 00:04:39.195 --rc genhtml_legend=1 00:04:39.195 --rc geninfo_all_blocks=1 00:04:39.195 --rc geninfo_unexecuted_blocks=1 00:04:39.195 00:04:39.195 ' 00:04:39.195 13:09:47 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:39.195 13:09:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:39.195 13:09:47 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:39.195 13:09:47 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:39.195 13:09:47 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.195 13:09:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.195 ************************************ 00:04:39.195 START TEST event_perf 00:04:39.195 ************************************ 00:04:39.195 13:09:47 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:39.195 Running I/O for 1 seconds...[2024-11-07 13:09:47.125643] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:04:39.195 [2024-11-07 13:09:47.125740] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590704 ] 00:04:39.456 [2024-11-07 13:09:47.264460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:39.456 [2024-11-07 13:09:47.365492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.456 [2024-11-07 13:09:47.365576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:39.456 [2024-11-07 13:09:47.365688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.456 Running I/O for 1 seconds...[2024-11-07 13:09:47.365715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.838 00:04:40.838 lcore 0: 199642 00:04:40.838 lcore 1: 199641 00:04:40.838 lcore 2: 199636 00:04:40.838 lcore 3: 199639 00:04:40.838 done. 00:04:40.838 00:04:40.838 real 0m1.463s 00:04:40.838 user 0m4.308s 00:04:40.838 sys 0m0.151s 00:04:40.838 13:09:48 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.838 13:09:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.838 ************************************ 00:04:40.838 END TEST event_perf 00:04:40.838 ************************************ 00:04:40.838 13:09:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:40.838 13:09:48 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:40.838 13:09:48 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.838 13:09:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.838 ************************************ 00:04:40.838 START TEST event_reactor 00:04:40.838 ************************************ 00:04:40.838 13:09:48 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:40.838 [2024-11-07 13:09:48.667786] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
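Each event test in this run drives a small standalone app with a core mask and a time budget. Collected for reference, these are the invocations visible in the traces (paths relative to the SPDK test tree; -m is the reactor core mask, -t the run time in seconds):

    test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore event counts across 4 reactors
    test/event/reactor/reactor -t 1                # oneshot/tick trace on a single reactor
    test/event/reactor_perf/reactor_perf -t 1      # events-per-second throughput measurement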
00:04:40.838 [2024-11-07 13:09:48.667896] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590990 ] 00:04:40.838 [2024-11-07 13:09:48.820207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.099 [2024-11-07 13:09:48.916287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.481 test_start 00:04:42.481 oneshot 00:04:42.481 tick 100 00:04:42.481 tick 100 00:04:42.481 tick 250 00:04:42.481 tick 100 00:04:42.481 tick 100 00:04:42.481 tick 250 00:04:42.481 tick 100 00:04:42.481 tick 500 00:04:42.481 tick 100 00:04:42.481 tick 100 00:04:42.481 tick 250 00:04:42.481 tick 100 00:04:42.481 tick 100 00:04:42.481 test_end 00:04:42.481 00:04:42.481 real 0m1.459s 00:04:42.481 user 0m1.307s 00:04:42.481 sys 0m0.146s 00:04:42.481 13:09:50 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.481 13:09:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:42.481 ************************************ 00:04:42.481 END TEST event_reactor 00:04:42.481 ************************************ 00:04:42.481 13:09:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:42.481 13:09:50 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:42.481 13:09:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.481 13:09:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.481 ************************************ 00:04:42.481 START TEST event_reactor_perf 00:04:42.481 ************************************ 00:04:42.481 13:09:50 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:42.481 [2024-11-07 13:09:50.183158] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:04:42.481 [2024-11-07 13:09:50.183246] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591271 ] 00:04:42.481 [2024-11-07 13:09:50.317655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.481 [2024-11-07 13:09:50.413912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.864 test_start 00:04:43.864 test_end 00:04:43.864 Performance: 295529 events per second 00:04:43.864 00:04:43.864 real 0m1.420s 00:04:43.864 user 0m1.291s 00:04:43.864 sys 0m0.124s 00:04:43.864 13:09:51 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:43.864 13:09:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:43.864 ************************************ 00:04:43.864 END TEST event_reactor_perf 00:04:43.864 ************************************ 00:04:43.864 13:09:51 event -- event/event.sh@49 -- # uname -s 00:04:43.864 13:09:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:43.864 13:09:51 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:43.864 13:09:51 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:43.864 13:09:51 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.864 13:09:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.864 ************************************ 00:04:43.864 START TEST event_scheduler 00:04:43.864 ************************************ 00:04:43.864 13:09:51 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:43.864 * Looking for test storage... 
00:04:43.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:43.864 13:09:51 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:43.864 13:09:51 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:43.864 13:09:51 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:43.864 13:09:51 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.864 13:09:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:43.864 13:09:51 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.864 13:09:51 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:43.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.864 --rc genhtml_branch_coverage=1 00:04:43.864 --rc genhtml_function_coverage=1 00:04:43.864 --rc genhtml_legend=1 00:04:43.864 --rc geninfo_all_blocks=1 00:04:43.864 --rc geninfo_unexecuted_blocks=1 00:04:43.864 00:04:43.864 ' 00:04:43.864 13:09:51 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:43.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.864 --rc genhtml_branch_coverage=1 00:04:43.864 --rc genhtml_function_coverage=1 00:04:43.864 --rc genhtml_legend=1 00:04:43.864 --rc geninfo_all_blocks=1 00:04:43.864 --rc geninfo_unexecuted_blocks=1 00:04:43.864 00:04:43.864 ' 00:04:43.864 13:09:51 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:43.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.864 --rc genhtml_branch_coverage=1 00:04:43.864 --rc genhtml_function_coverage=1 00:04:43.864 --rc genhtml_legend=1 00:04:43.864 --rc geninfo_all_blocks=1 00:04:43.864 --rc geninfo_unexecuted_blocks=1 00:04:43.864 00:04:43.864 ' 00:04:43.865 13:09:51 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.865 --rc genhtml_branch_coverage=1 00:04:43.865 --rc genhtml_function_coverage=1 00:04:43.865 --rc genhtml_legend=1 00:04:43.865 --rc geninfo_all_blocks=1 00:04:43.865 --rc geninfo_unexecuted_blocks=1 00:04:43.865 00:04:43.865 ' 00:04:43.865 13:09:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:44.125 13:09:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3591668 00:04:44.125 13:09:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.126 13:09:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3591668 00:04:44.126 13:09:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:44.126 13:09:51 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3591668 ']' 00:04:44.126 13:09:51 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.126 13:09:51 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.126 13:09:51 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.126 13:09:51 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.126 13:09:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.126 [2024-11-07 13:09:51.957763] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:04:44.126 [2024-11-07 13:09:51.957926] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591668 ] 00:04:44.126 [2024-11-07 13:09:52.095364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:44.385 [2024-11-07 13:09:52.176118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.385 [2024-11-07 13:09:52.176347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.385 [2024-11-07 13:09:52.176436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.385 [2024-11-07 13:09:52.176463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.955 13:09:52 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.955 13:09:52 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:44.955 13:09:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:44.955 13:09:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.955 13:09:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.955 [2024-11-07 13:09:52.742511] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:44.955 [2024-11-07 13:09:52.742532] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:44.955 [2024-11-07 13:09:52.742545] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:44.955 [2024-11-07 13:09:52.742552] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:44.955 [2024-11-07 13:09:52.742559] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:44.955 13:09:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.955 13:09:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:44.955 13:09:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.955 13:09:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.955 [2024-11-07 13:09:52.921906] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
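Before any threads are created, the trace above shows the scheduler test moving the app out of its --wait-for-rpc hold and into the dynamic scheduler (the dpdk governor fails to initialize on this core mask, and the test continues without it using load limit 20, core limit 80, core busy 95). A minimal replay of that RPC sequence, assuming the test app is still listening on the default /var/tmp/spdk.sock (rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py):

    scripts/rpc.py framework_set_scheduler dynamic   # triggers the set_opts notices above
    scripts/rpc.py framework_start_init              # leave the --wait-for-rpc state
    scripts/rpc.py framework_get_scheduler           # confirm the active scheduler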
00:04:44.955 13:09:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.955 13:09:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:44.955 13:09:52 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:44.955 13:09:52 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.955 13:09:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.955 ************************************ 00:04:44.955 START TEST scheduler_create_thread 00:04:44.955 ************************************ 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.215 2 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.215 3 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.215 4 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.215 13:09:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.215 5 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.215 6 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.215 7 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.215 8 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.215 9 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.215 10 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.215 13:09:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.596 13:09:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.596 13:09:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:46.596 13:09:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:46.596 13:09:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.596 13:09:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.537 13:09:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.537 13:09:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:47.537 13:09:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.537 13:09:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.142 13:09:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.142 13:09:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:48.142 13:09:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:48.142 13:09:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.142 13:09:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.133 13:09:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.133 00:04:49.133 real 0m3.894s 00:04:49.133 user 0m0.026s 00:04:49.133 sys 0m0.006s 00:04:49.133 13:09:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.133 13:09:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.133 ************************************ 00:04:49.133 END TEST scheduler_create_thread 00:04:49.133 ************************************ 00:04:49.133 13:09:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:49.133 13:09:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3591668 00:04:49.133 13:09:56 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3591668 ']' 00:04:49.133 13:09:56 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 3591668 00:04:49.133 13:09:56 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:49.133 13:09:56 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.133 13:09:56 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3591668 00:04:49.133 13:09:56 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:49.133 13:09:56 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:49.133 13:09:56 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3591668' 00:04:49.133 killing process with pid 3591668 00:04:49.133 13:09:56 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3591668 00:04:49.133 13:09:56 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3591668 00:04:49.393 [2024-11-07 13:09:57.234004] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
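The scheduler_create_thread test that just finished exercises the scheduler plugin's thread lifecycle RPCs. Condensed from the traces above (thread ids 11 and 12 are simply the ids this run happened to receive; -n names the thread, -m pins it to a core mask, -a sets its target active percentage):

    rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread 11 -> 50% busy
    rpc.py --plugin scheduler_plugin scheduler_thread_delete 12          # remove thread 12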
00:04:49.968 00:04:49.968 real 0m6.135s 00:04:49.968 user 0m12.715s 00:04:49.968 sys 0m0.548s 00:04:49.968 13:09:57 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.968 13:09:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.968 ************************************ 00:04:49.968 END TEST event_scheduler 00:04:49.968 ************************************ 00:04:49.968 13:09:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:49.968 13:09:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:49.968 13:09:57 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.968 13:09:57 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.968 13:09:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.968 ************************************ 00:04:49.968 START TEST app_repeat 00:04:49.968 ************************************ 00:04:49.968 13:09:57 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3592892 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3592892' 00:04:49.968 Process app_repeat pid: 3592892 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:49.968 spdk_app_start Round 0 00:04:49.968 13:09:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3592892 /var/tmp/spdk-nbd.sock 00:04:49.968 13:09:57 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3592892 ']' 00:04:49.968 13:09:57 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.968 13:09:57 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:49.968 13:09:57 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.968 13:09:57 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:49.968 13:09:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.968 [2024-11-07 13:09:57.942619] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
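The app_repeat round that begins here creates its bdevs over the app's private -r socket before exporting them as NBD devices for the dd-based verification. The setup traced below boils down to the following, with the sizes taken from the bdev_malloc_create calls in the trace (64 MiB in 4096-byte blocks):

    SOCK=/var/tmp/spdk-nbd.sock
    scripts/rpc.py -s $SOCK bdev_malloc_create 64 4096        # -> Malloc0
    scripts/rpc.py -s $SOCK bdev_malloc_create 64 4096        # -> Malloc1
    scripts/rpc.py -s $SOCK nbd_start_disk Malloc0 /dev/nbd0
    scripts/rpc.py -s $SOCK nbd_start_disk Malloc1 /dev/nbd1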
00:04:49.968 [2024-11-07 13:09:57.942726] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3592892 ] 00:04:50.233 [2024-11-07 13:09:58.092624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.233 [2024-11-07 13:09:58.192921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.233 [2024-11-07 13:09:58.192932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.802 13:09:58 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.802 13:09:58 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:50.802 13:09:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.064 Malloc0 00:04:51.064 13:09:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.325 Malloc1 00:04:51.325 13:09:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.325 13:09:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.600 /dev/nbd0 00:04:51.600 13:09:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.600 13:09:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.600 1+0 records in 00:04:51.600 1+0 records out 00:04:51.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304737 s, 13.4 MB/s 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:51.600 13:09:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:51.600 13:09:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.600 13:09:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.600 13:09:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.600 /dev/nbd1 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.861 1+0 records in 00:04:51.861 1+0 records out 00:04:51.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028637 s, 14.3 MB/s 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:51.861 13:09:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.861 13:09:59 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.861 { 00:04:51.861 "nbd_device": "/dev/nbd0", 00:04:51.861 "bdev_name": "Malloc0" 00:04:51.861 }, 00:04:51.861 { 00:04:51.861 "nbd_device": "/dev/nbd1", 00:04:51.861 "bdev_name": "Malloc1" 00:04:51.861 } 00:04:51.861 ]' 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.861 { 00:04:51.861 "nbd_device": "/dev/nbd0", 00:04:51.861 "bdev_name": "Malloc0" 00:04:51.861 }, 00:04:51.861 { 00:04:51.861 "nbd_device": "/dev/nbd1", 00:04:51.861 "bdev_name": "Malloc1" 00:04:51.861 } 00:04:51.861 ]' 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.861 /dev/nbd1' 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.861 /dev/nbd1' 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.861 256+0 records in 00:04:51.861 256+0 records out 00:04:51.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122659 s, 85.5 MB/s 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.861 13:09:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.121 256+0 records in 00:04:52.121 256+0 records out 00:04:52.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184832 s, 56.7 MB/s 00:04:52.121 13:09:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.121 13:09:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.121 256+0 records in 00:04:52.121 256+0 records out 00:04:52.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218002 s, 48.1 MB/s 00:04:52.122 13:09:59 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.122 13:09:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.122 13:10:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.122 13:10:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.122 13:10:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.122 13:10:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.122 13:10:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.122 13:10:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.122 13:10:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.122 13:10:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.122 13:10:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.122 13:10:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.382 13:10:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.382 13:10:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.382 13:10:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.382 13:10:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.382 13:10:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:52.382 13:10:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.382 13:10:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.382 13:10:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.382 13:10:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.382 13:10:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.382 13:10:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.643 13:10:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.643 13:10:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.643 13:10:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.643 13:10:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.643 13:10:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.643 13:10:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.643 13:10:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.643 13:10:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.643 13:10:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.643 13:10:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.643 13:10:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.643 13:10:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.643 13:10:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.903 13:10:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.845 [2024-11-07 13:10:01.653075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.845 [2024-11-07 13:10:01.745144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.846 [2024-11-07 13:10:01.745146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.107 [2024-11-07 13:10:01.883972] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.107 [2024-11-07 13:10:01.884022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.027 13:10:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.027 13:10:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:56.027 spdk_app_start Round 1 00:04:56.027 13:10:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3592892 /var/tmp/spdk-nbd.sock 00:04:56.027 13:10:03 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3592892 ']' 00:04:56.027 13:10:03 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.027 13:10:03 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.027 13:10:03 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
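Round 0 is complete; Rounds 1 and 2 below repeat the same create/verify/teardown cycle. Condensed into plain commands, one round amounts to the sketch below; every RPC appears verbatim in the trace (bdev_malloc_create takes the bdev size in MB and its block size):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # create two 64 MB malloc bdevs with 4096-byte blocks, export them over NBD
    $rpc bdev_malloc_create 64 4096               # -> Malloc0
    $rpc bdev_malloc_create 64 4096               # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    # write a 1 MiB random pattern through each device, then read-verify it
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest $nbd             # any mismatch fails the test
    done
    rm nbdrandtest
    # teardown: detach both devices and terminate this app instance
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM
    sleep 3                                       # event.sh@35, before next round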
00:04:56.027 13:10:03 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.027 13:10:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.027 13:10:04 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.027 13:10:04 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:56.027 13:10:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.287 Malloc0 00:04:56.287 13:10:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.547 Malloc1 00:04:56.547 13:10:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.547 13:10:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.807 /dev/nbd0 00:04:56.807 13:10:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.807 13:10:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:56.807 13:10:04 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:56.807 13:10:04 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:56.807 13:10:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:56.807 13:10:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:56.808 13:10:04 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:56.808 13:10:04 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:56.808 13:10:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:56.808 13:10:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:56.808 13:10:04 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:56.808 1+0 records in 00:04:56.808 1+0 records out 00:04:56.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290705 s, 14.1 MB/s 00:04:56.808 13:10:04 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.808 13:10:04 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:56.808 13:10:04 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.808 13:10:04 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:56.808 13:10:04 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:56.808 13:10:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.808 13:10:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.808 13:10:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.068 /dev/nbd1 00:04:57.068 13:10:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.068 13:10:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.068 1+0 records in 00:04:57.068 1+0 records out 00:04:57.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028641 s, 14.3 MB/s 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:57.068 13:10:04 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:57.068 13:10:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.068 13:10:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.068 13:10:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.068 13:10:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.068 13:10:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.068 13:10:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:57.068 { 00:04:57.069 "nbd_device": "/dev/nbd0", 00:04:57.069 "bdev_name": "Malloc0" 00:04:57.069 }, 00:04:57.069 { 00:04:57.069 "nbd_device": "/dev/nbd1", 00:04:57.069 "bdev_name": "Malloc1" 00:04:57.069 } 00:04:57.069 ]' 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:57.330 { 00:04:57.330 "nbd_device": "/dev/nbd0", 00:04:57.330 "bdev_name": "Malloc0" 00:04:57.330 }, 00:04:57.330 { 00:04:57.330 "nbd_device": "/dev/nbd1", 00:04:57.330 "bdev_name": "Malloc1" 00:04:57.330 } 00:04:57.330 ]' 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:57.330 /dev/nbd1' 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.330 /dev/nbd1' 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.330 256+0 records in 00:04:57.330 256+0 records out 00:04:57.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117912 s, 88.9 MB/s 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.330 256+0 records in 00:04:57.330 256+0 records out 00:04:57.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190831 s, 54.9 MB/s 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.330 256+0 records in 00:04:57.330 256+0 records out 00:04:57.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218425 s, 48.0 MB/s 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.330 13:10:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.592 13:10:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.853 13:10:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.853 13:10:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.853 13:10:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.853 13:10:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.853 13:10:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.853 13:10:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.853 13:10:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.853 13:10:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.853 13:10:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.853 13:10:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.853 13:10:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.853 13:10:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.853 13:10:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.114 13:10:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.056 [2024-11-07 13:10:06.926672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.056 [2024-11-07 13:10:07.021680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.057 [2024-11-07 13:10:07.021696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.318 [2024-11-07 13:10:07.160728] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.318 [2024-11-07 13:10:07.160776] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:01.231 13:10:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:01.231 13:10:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:01.231 spdk_app_start Round 2 00:05:01.231 13:10:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3592892 /var/tmp/spdk-nbd.sock 00:05:01.231 13:10:09 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3592892 ']' 00:05:01.231 13:10:09 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.231 13:10:09 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:01.231 13:10:09 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:01.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
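One detail worth calling out before Round 2's device setup repeats: the waitfornbd probe that guards every dd above (autotest_common.sh@870-891). Presence in /proc/partitions only proves the kernel created the device node, so the helper also reads one block back with O_DIRECT and checks that data actually arrived. A minimal reconstruction; the retry sleep and the collapsed second loop are assumptions:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do           # retry bound as in the trace
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                             # retry interval: assumed
        done
        # one 4096-byte O_DIRECT read must succeed and return a non-empty file
        dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s nbdtest)                # trace: size=4096
        rm -f nbdtest
        [ "$size" != 0 ]                          # trace: '[' 4096 '!=' 0 ']'
    }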
00:05:01.231 13:10:09 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:01.231 13:10:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.492 13:10:09 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:01.492 13:10:09 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:01.492 13:10:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.492 Malloc0 00:05:01.492 13:10:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.753 Malloc1 00:05:01.753 13:10:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.753 13:10:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.016 /dev/nbd0 00:05:02.016 13:10:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.016 13:10:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:02.016 1+0 records in 00:05:02.016 1+0 records out 00:05:02.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292324 s, 14.0 MB/s 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:02.016 13:10:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:02.016 13:10:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.016 13:10:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.016 13:10:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.277 /dev/nbd1 00:05:02.277 13:10:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.277 13:10:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.277 1+0 records in 00:05:02.277 1+0 records out 00:05:02.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435684 s, 9.4 MB/s 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:02.277 13:10:10 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:02.277 13:10:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.277 13:10:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.277 13:10:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.277 13:10:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.277 13:10:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:02.539 { 00:05:02.539 "nbd_device": "/dev/nbd0", 00:05:02.539 "bdev_name": "Malloc0" 00:05:02.539 }, 00:05:02.539 { 00:05:02.539 "nbd_device": "/dev/nbd1", 00:05:02.539 "bdev_name": "Malloc1" 00:05:02.539 } 00:05:02.539 ]' 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.539 { 00:05:02.539 "nbd_device": "/dev/nbd0", 00:05:02.539 "bdev_name": "Malloc0" 00:05:02.539 }, 00:05:02.539 { 00:05:02.539 "nbd_device": "/dev/nbd1", 00:05:02.539 "bdev_name": "Malloc1" 00:05:02.539 } 00:05:02.539 ]' 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.539 /dev/nbd1' 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.539 /dev/nbd1' 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.539 256+0 records in 00:05:02.539 256+0 records out 00:05:02.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128367 s, 81.7 MB/s 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.539 256+0 records in 00:05:02.539 256+0 records out 00:05:02.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188134 s, 55.7 MB/s 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.539 256+0 records in 00:05:02.539 256+0 records out 00:05:02.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191353 s, 54.8 MB/s 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.539 13:10:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.799 13:10:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.799 13:10:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.799 13:10:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.799 13:10:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.799 13:10:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.799 13:10:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.799 13:10:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.799 13:10:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.800 13:10:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.800 13:10:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.060 13:10:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.060 13:10:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.060 13:10:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.060 13:10:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.060 13:10:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.060 13:10:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.060 13:10:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.060 13:10:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.060 13:10:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.060 13:10:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.061 13:10:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.061 13:10:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.061 13:10:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.061 13:10:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.061 13:10:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.061 13:10:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.061 13:10:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.061 13:10:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.061 13:10:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.061 13:10:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.061 13:10:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.061 13:10:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.061 13:10:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.061 13:10:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:03.630 13:10:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.201 [2024-11-07 13:10:12.173700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.461 [2024-11-07 13:10:12.265211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.461 [2024-11-07 13:10:12.265213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.461 [2024-11-07 13:10:12.404006] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.461 [2024-11-07 13:10:12.404052] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:06.375 13:10:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3592892 /var/tmp/spdk-nbd.sock 00:05:06.375 13:10:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3592892 ']' 00:05:06.375 13:10:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.375 13:10:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:06.375 13:10:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
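After each teardown the harness asserts that no NBD devices remain attached; nbd_get_count derives the number by parsing the nbd_get_disks JSON, exactly as traced above. A sketch of that check (grep -c exits non-zero on a zero count, which is why a bare 'true' shows up in the trace):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd_get_count() {
        local json names count
        json=$($rpc nbd_get_disks)                # '[]' once both disks stop
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }
    [ "$(nbd_get_count)" -eq 0 ]                  # trace: '[' 0 -ne 0 ']' -> pass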
00:05:06.375 13:10:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:06.375 13:10:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.635 13:10:14 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:06.635 13:10:14 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:06.635 13:10:14 event.app_repeat -- event/event.sh@39 -- # killprocess 3592892 00:05:06.635 13:10:14 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3592892 ']' 00:05:06.635 13:10:14 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3592892 00:05:06.635 13:10:14 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:06.635 13:10:14 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:06.635 13:10:14 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3592892 00:05:06.635 13:10:14 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:06.635 13:10:14 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:06.635 13:10:14 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3592892' 00:05:06.635 killing process with pid 3592892 00:05:06.635 13:10:14 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3592892 00:05:06.635 13:10:14 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3592892 00:05:07.578 spdk_app_start is called in Round 0. 00:05:07.579 Shutdown signal received, stop current app iteration 00:05:07.579 Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 reinitialization... 00:05:07.579 spdk_app_start is called in Round 1. 00:05:07.579 Shutdown signal received, stop current app iteration 00:05:07.579 Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 reinitialization... 00:05:07.579 spdk_app_start is called in Round 2. 00:05:07.579 Shutdown signal received, stop current app iteration 00:05:07.579 Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 reinitialization... 00:05:07.579 spdk_app_start is called in Round 3. 
00:05:07.579 Shutdown signal received, stop current app iteration
00:05:07.579 13:10:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:07.579 13:10:15 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:07.579
00:05:07.579 real 0m17.400s
00:05:07.579 user 0m36.581s
00:05:07.579 sys 0m2.499s
00:05:07.579 13:10:15 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:07.579 13:10:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:07.579 ************************************
00:05:07.579 END TEST app_repeat
00:05:07.579 ************************************
00:05:07.579 13:10:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:07.579 13:10:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:07.579 13:10:15 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:07.579 13:10:15 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:07.579 13:10:15 event -- common/autotest_common.sh@10 -- # set +x
00:05:07.579 ************************************
00:05:07.579 START TEST cpu_locks
00:05:07.579 ************************************
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:07.579 * Looking for test storage...
00:05:07.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:07.579 13:10:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:07.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:07.579 --rc genhtml_branch_coverage=1
00:05:07.579 --rc genhtml_function_coverage=1
00:05:07.579 --rc genhtml_legend=1
00:05:07.579 --rc geninfo_all_blocks=1
00:05:07.579 --rc geninfo_unexecuted_blocks=1
00:05:07.579
00:05:07.579 '
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:07.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:07.579 --rc genhtml_branch_coverage=1
00:05:07.579 --rc genhtml_function_coverage=1
00:05:07.579 --rc genhtml_legend=1
00:05:07.579 --rc geninfo_all_blocks=1
00:05:07.579 --rc geninfo_unexecuted_blocks=1
00:05:07.579
00:05:07.579 '
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:07.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:07.579 --rc genhtml_branch_coverage=1
00:05:07.579 --rc genhtml_function_coverage=1
00:05:07.579 --rc genhtml_legend=1
00:05:07.579 --rc geninfo_all_blocks=1
00:05:07.579 --rc geninfo_unexecuted_blocks=1
00:05:07.579
00:05:07.579 '
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:07.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:07.579 --rc genhtml_branch_coverage=1
00:05:07.579 --rc genhtml_function_coverage=1
00:05:07.579 --rc genhtml_legend=1
00:05:07.579 --rc geninfo_all_blocks=1
00:05:07.579 --rc geninfo_unexecuted_blocks=1
00:05:07.579
00:05:07.579 '
00:05:07.579 13:10:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:07.579 13:10:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:07.579 13:10:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:07.579 13:10:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:07.579 13:10:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
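The cmp_versions trace above is autotest's generic version gate (here deciding that lcov 1.15 predates 2, so the coverage-option overrides get exported, and the suite then sets up its two RPC sockets). A minimal standalone sketch of the same split-and-compare idea, assuming plain dotted version strings rather than the full '.-:'-separated grammar that scripts/common.sh accepts:

  # lt A B -> succeeds when version A sorts before version B (sketch)
  lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo older   # prints "older", mirroring the return 0 above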
00:05:07.840 ************************************
00:05:07.840 START TEST default_locks
************************************
00:05:07.840 13:10:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks
00:05:07.840 13:10:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3596766
00:05:07.840 13:10:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3596766
00:05:07.840 13:10:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:07.840 13:10:15 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3596766 ']'
00:05:07.840 13:10:15 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:07.840 13:10:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:07.840 13:10:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:07.840 13:10:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:07.840 13:10:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:07.840 [2024-11-07 13:10:15.683798] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:05:07.840 [2024-11-07 13:10:15.683912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596766 ]
00:05:07.840 [2024-11-07 13:10:15.804719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.101 [2024-11-07 13:10:15.901126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.672 13:10:16 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:08.672 13:10:16 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0
00:05:08.672 13:10:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3596766
00:05:08.672 13:10:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3596766
00:05:08.672 13:10:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:09.244 lslocks: write error
00:05:09.244 13:10:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3596766
00:05:09.244 13:10:16 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3596766 ']'
00:05:09.244 13:10:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3596766
00:05:09.244 13:10:16 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname
00:05:09.244 13:10:16 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:09.244 13:10:16 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3596766
00:05:09.244 13:10:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:09.244 13:10:17 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:09.244 13:10:17 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3596766'
killing process with pid 3596766
13:10:17 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3596766
13:10:17 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3596766
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3596766
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3596766
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3596766
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3596766 ']'
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:10.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3596766) - No such process
00:05:10.628 ERROR: process (pid: 3596766) is no longer running
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:10.628
00:05:10.628 real 0m3.033s
00:05:10.628 user 0m3.029s
00:05:10.628 sys 0m0.642s
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:10.628 13:10:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:10.628 ************************************
00:05:10.628 END TEST default_locks
00:05:10.628 ************************************
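The failed waitforlisten above is the point of the second half of default_locks: after kill/wait, the suite wraps the call in NOT so the command's failure becomes the test's success. A rough sketch of that inversion, assuming only the exit status matters (this is not the full autotest_common.sh helper, which also validates the argument first):

  # NOT-style helper: succeed only when the wrapped command fails (sketch)
  NOT() {
    "$@" && return 1   # unexpected success
    return 0           # expected failure
  }
  NOT kill -0 3596766 && echo "target really is gone"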
00:05:10.890 13:10:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:10.890 13:10:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:10.890 13:10:18 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:10.890 13:10:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:10.890 ************************************
00:05:10.890 START TEST default_locks_via_rpc
00:05:10.890 ************************************
00:05:10.890 13:10:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc
00:05:10.890 13:10:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3597334
00:05:10.890 13:10:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3597334
00:05:10.890 13:10:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:10.890 13:10:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3597334 ']'
00:05:10.890 13:10:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:10.890 13:10:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:10.890 13:10:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:10.890 13:10:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:10.890 13:10:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:10.890 [2024-11-07 13:10:18.808734] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:05:10.890 [2024-11-07 13:10:18.808880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597334 ]
00:05:11.151 [2024-11-07 13:10:18.961194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:11.151 [2024-11-07 13:10:19.062190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.725 13:10:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:11.725 13:10:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:05:11.725 13:10:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:11.725 13:10:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:11.725 13:10:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.726 13:10:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:11.726 13:10:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:11.726 13:10:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:11.726 13:10:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:11.726 13:10:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:11.726 13:10:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:11.726 13:10:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:11.726 13:10:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.726 13:10:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:11.726 13:10:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3597334
00:05:11.726 13:10:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3597334
00:05:11.726 13:10:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:12.298 13:10:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3597334
00:05:12.298 13:10:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3597334 ']'
00:05:12.298 13:10:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3597334
00:05:12.298 13:10:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname
00:05:12.298 13:10:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:12.298 13:10:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3597334
00:05:12.298 13:10:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:12.298 13:10:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:12.298 13:10:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3597334'
killing process with pid 3597334
13:10:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3597334
13:10:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3597334
00:05:14.206
00:05:14.206 real 0m3.057s
00:05:14.206 user 0m3.028s
00:05:14.206 sys 0m0.664s
00:05:14.206 13:10:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:14.206 13:10:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:14.206 ************************************
00:05:14.206 END TEST default_locks_via_rpc
00:05:14.206 ************************************
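default_locks_via_rpc exercised the same lock lifecycle without restarting the target: framework_disable_cpumask_locks released the per-core files on the live app and framework_enable_cpumask_locks re-claimed them, as the rpc_cmd trace above shows. A hedged equivalent using SPDK's rpc.py (path per this workspace; the default socket is /var/tmp/spdk.sock):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC framework_disable_cpumask_locks    # core lock files released at runtime
  $RPC framework_enable_cpumask_locks     # and re-acquired for the app's cpumask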
00:05:14.206 13:10:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:14.206 13:10:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:14.206 13:10:21 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:14.206 13:10:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:14.206 ************************************
00:05:14.206 START TEST non_locking_app_on_locked_coremask
00:05:14.206 ************************************
00:05:14.206 13:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask
00:05:14.206 13:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3597913
00:05:14.206 13:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3597913 /var/tmp/spdk.sock
00:05:14.206 13:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:14.206 13:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3597913 ']'
00:05:14.206 13:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:14.206 13:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:14.206 13:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:14.206 13:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:14.206 13:10:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:14.206 [2024-11-07 13:10:21.927924] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:05:14.206 [2024-11-07 13:10:21.928060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597913 ]
00:05:14.206 [2024-11-07 13:10:22.086288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:14.206 [2024-11-07 13:10:22.184965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:15.146 13:10:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:15.146 13:10:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:15.146 13:10:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3598231
00:05:15.146 13:10:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3598231 /var/tmp/spdk2.sock
00:05:15.146 13:10:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3598231 ']'
00:05:15.146 13:10:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:15.146 13:10:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:15.146 13:10:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:15.146 13:10:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:15.146 13:10:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:15.146 13:10:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:15.146 [2024-11-07 13:10:22.922766] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:05:15.146 [2024-11-07 13:10:22.922886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598231 ]
00:05:15.146 [2024-11-07 13:10:23.121023] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:15.146 [2024-11-07 13:10:23.121075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:15.407 [2024-11-07 13:10:23.313772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.792 13:10:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:16.792 13:10:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:16.792 13:10:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3597913
00:05:16.792 13:10:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:16.792 13:10:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3597913
00:05:17.053 lslocks: write error
00:05:17.053 13:10:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3597913
00:05:17.053 13:10:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3597913 ']'
00:05:17.053 13:10:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3597913
00:05:17.053 13:10:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:17.053 13:10:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:17.053 13:10:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3597913
00:05:17.314 13:10:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:17.314 13:10:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:17.314 13:10:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3597913'
killing process with pid 3597913
13:10:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3597913
13:10:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3597913
00:05:20.618 13:10:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3598231
00:05:20.618 13:10:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3598231 ']'
00:05:20.618 13:10:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3598231
00:05:20.618 13:10:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:20.618 13:10:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:20.618 13:10:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3598231
00:05:20.618 13:10:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:20.618 13:10:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:20.618 13:10:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3598231'
killing process with pid 3598231
13:10:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3598231
13:10:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3598231
00:05:22.004
00:05:22.004 real 0m8.122s
00:05:22.004 user 0m8.245s
00:05:22.004 sys 0m1.166s
00:05:22.004 13:10:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:22.004 13:10:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:22.004 ************************************
00:05:22.004 END TEST non_locking_app_on_locked_coremask
00:05:22.004 ************************************
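The pass above hinges on one flag: the second target shared core 0 with the lock holder only because --disable-cpumask-locks kept it from claiming the file at startup. Condensed to its essentials (binary path, flags and sockets as in the trace; a sketch, not the harness's exact launch code):

  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $BIN -m 0x1 &                                                 # claims /var/tmp/spdk_cpu_lock_000
  $BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, no claim, no conflict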
00:05:22.004 13:10:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:22.004 13:10:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:22.004 13:10:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:22.004 13:10:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:22.265 ************************************
00:05:22.265 START TEST locking_app_on_unlocked_coremask
00:05:22.265 ************************************
00:05:22.265 13:10:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask
00:05:22.265 13:10:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3599616
00:05:22.265 13:10:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3599616 /var/tmp/spdk.sock
00:05:22.265 13:10:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:22.265 13:10:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3599616 ']'
00:05:22.265 13:10:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:22.265 13:10:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:22.265 13:10:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:22.265 13:10:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:22.265 13:10:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:22.265 [2024-11-07 13:10:30.127509] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:05:22.265 [2024-11-07 13:10:30.127623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599616 ]
00:05:22.265 [2024-11-07 13:10:30.268286] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:22.265 [2024-11-07 13:10:30.268328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:22.526 [2024-11-07 13:10:30.366075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.098 13:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:23.098 13:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:23.098 13:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3599948
00:05:23.098 13:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3599948 /var/tmp/spdk2.sock
00:05:23.098 13:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3599948 ']'
00:05:23.098 13:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:23.098 13:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:23.098 13:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:23.098 13:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:23.098 13:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:23.098 13:10:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:23.098 [2024-11-07 13:10:31.098156] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:05:23.098 [2024-11-07 13:10:31.098269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599948 ]
00:05:23.360 [2024-11-07 13:10:31.299931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.621 [2024-11-07 13:10:31.492575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.007 13:10:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:25.007 13:10:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:25.007 13:10:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3599948
00:05:25.007 13:10:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3599948
00:05:25.007 13:10:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:25.580 lslocks: write error
00:05:25.580 13:10:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3599616
00:05:25.580 13:10:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3599616 ']'
00:05:25.580 13:10:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3599616
00:05:25.580 13:10:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:25.580 13:10:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:25.580 13:10:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3599616
00:05:25.580 13:10:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:25.580 13:10:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:25.580 13:10:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3599616'
killing process with pid 3599616
13:10:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3599616
13:10:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3599616
00:05:28.882 13:10:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3599948
00:05:28.882 13:10:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3599948 ']'
00:05:28.882 13:10:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3599948
00:05:28.882 13:10:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:28.882 13:10:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:28.882 13:10:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3599948
00:05:28.882 13:10:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:28.882 13:10:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:28.882 13:10:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3599948'
killing process with pid 3599948
13:10:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3599948
13:10:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3599948
00:05:30.280
00:05:30.280 real 0m8.239s
00:05:30.280 user 0m8.379s
00:05:30.280 sys 0m1.156s
00:05:30.280 13:10:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:30.280 13:10:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:30.280 ************************************
00:05:30.280 END TEST locking_app_on_unlocked_coremask
00:05:30.280 ************************************
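Throughout these cases the ground truth is the advisory lock held on /var/tmp/spdk_cpu_lock_NNN, and the suite only ever inspects it through lslocks. The same inspection works outside the harness; a one-line sketch, assuming util-linux's default lslocks column layout (PID in the second field, path in the last):

  lslocks | awk '/spdk_cpu_lock/ {print "pid", $2, "holds", $NF}'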
00:05:30.541 13:10:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:30.541 13:10:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:30.541 13:10:38 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:30.541 13:10:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:30.541 ************************************
00:05:30.541 START TEST locking_app_on_locked_coremask
00:05:30.541 ************************************
00:05:30.541 13:10:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask
00:05:30.541 13:10:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3601334
00:05:30.541 13:10:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3601334 /var/tmp/spdk.sock
00:05:30.541 13:10:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:30.541 13:10:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3601334 ']'
00:05:30.541 13:10:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:30.541 13:10:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:30.541 13:10:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:30.541 13:10:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:30.541 13:10:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:30.541 [2024-11-07 13:10:38.437242] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:05:30.541 [2024-11-07 13:10:38.437364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601334 ]
00:05:30.802 [2024-11-07 13:10:38.587158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.802 [2024-11-07 13:10:38.685227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3601665
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3601665 /var/tmp/spdk2.sock
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3601665 /var/tmp/spdk2.sock
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3601665 /var/tmp/spdk2.sock
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3601665 ']'
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:31.372 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:31.633 [2024-11-07 13:10:39.418611] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:05:31.633 [2024-11-07 13:10:39.418729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601665 ]
00:05:31.633 [2024-11-07 13:10:39.620297] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3601334 has claimed it.
00:05:31.633 [2024-11-07 13:10:39.620357] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:32.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3601665) - No such process
00:05:32.204 ERROR: process (pid: 3601665) is no longer running
00:05:32.204 13:10:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:32.204 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1
00:05:32.204 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:32.204 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:32.204 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:32.204 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:32.205 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3601334
00:05:32.205 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3601334
00:05:32.205 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:32.466 lslocks: write error
00:05:32.466 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3601334
00:05:32.466 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3601334 ']'
00:05:32.466 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3601334
00:05:32.466 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:32.466 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:32.466 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3601334
00:05:32.466 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:32.466 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:32.466 13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3601334'
killing process with pid 3601334
13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3601334
13:10:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3601334
00:05:34.386
00:05:34.386 real 0m3.696s
00:05:34.386 user 0m3.846s
00:05:34.386 sys 0m0.806s
13:10:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:34.386 13:10:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:34.386 ************************************
00:05:34.386 END TEST locking_app_on_locked_coremask
00:05:34.386 ************************************
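locking_app_on_locked_coremask just demonstrated the refusal path: with locks left enabled, the second target hit claim_cpu_cores, logged "Cannot create lock on core 0", and exited, which the NOT wrapper converts into a pass. The shape of that expectation, hedged (the settle delay is hypothetical; the harness uses its own waitforlisten/NOT machinery instead):

  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $BIN -m 0x1 &    # first instance holds the core 0 lock
  sleep 1          # hypothetical: give the first claim time to land
  if ! $BIN -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second instance refused core 0, as expected"
  fi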
00:05:34.386 13:10:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:34.386 13:10:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:34.386 13:10:42 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:34.386 13:10:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:34.386 ************************************
00:05:34.386 START TEST locking_overlapped_coremask
00:05:34.386 ************************************
00:05:34.386 13:10:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask
00:05:34.386 13:10:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3602074
00:05:34.386 13:10:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3602074 /var/tmp/spdk.sock
00:05:34.386 13:10:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:34.386 13:10:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3602074 ']'
00:05:34.386 13:10:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:34.386 13:10:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:34.386 13:10:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:34.386 13:10:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:34.386 13:10:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:34.386 [2024-11-07 13:10:42.216439] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:05:34.386 [2024-11-07 13:10:42.216569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602074 ]
00:05:34.386 [2024-11-07 13:10:42.370505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:34.736 [2024-11-07 13:10:42.472166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:34.736 [2024-11-07 13:10:42.472246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.736 [2024-11-07 13:10:42.472246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3602384
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3602384 /var/tmp/spdk2.sock
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3602384 /var/tmp/spdk2.sock
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3602384 /var/tmp/spdk2.sock
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3602384 ']'
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:35.382 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:35.382 [2024-11-07 13:10:43.219425] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
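The two cpumasks in play here are chosen to collide on exactly one core: 0x7 is cores 0-2 and 0x1c is cores 2-4, so core 2 is the contested one. The overlap falls out of plain bit arithmetic:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2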
00:05:35.382 [2024-11-07 13:10:43.219535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602384 ]
00:05:35.643 [2024-11-07 13:10:43.388453] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3602074 has claimed it.
00:05:35.643 [2024-11-07 13:10:43.388505] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:35.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3602384) - No such process
00:05:35.904 ERROR: process (pid: 3602384) is no longer running
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3602074
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3602074 ']'
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3602074
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3602074
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:35.904 13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3602074'
killing process with pid 3602074
13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3602074
13:10:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3602074
00:05:37.819
00:05:37.819 real 0m3.342s
00:05:37.819 user 0m9.002s
00:05:37.819 sys 0m0.600s
00:05:37.819 13:10:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:37.819 13:10:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:37.819 ************************************
00:05:37.819 END TEST locking_overlapped_coremask
00:05:37.819 ************************************
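Besides the refusal, the trace above ran check_remaining_locks, which asserts that the surviving lock files are exactly spdk_cpu_lock_000 through _002 (the first target's 0x7 cores) with nothing stale left behind. The same glob-versus-brace comparison from cpu_locks.sh, restated as a standalone sketch:

  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${expected[*]}" ]] && echo "lock set matches cpumask 0x7 exactly"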
00:05:37.819 [2024-11-07 13:10:45.797188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.080 [2024-11-07 13:10:45.900933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.080 [2024-11-07 13:10:45.900984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.080 [2024-11-07 13:10:45.900988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.651 13:10:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.651 13:10:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:38.651 13:10:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:38.651 13:10:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3603097 00:05:38.651 13:10:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3603097 /var/tmp/spdk2.sock 00:05:38.651 13:10:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3603097 ']' 00:05:38.651 13:10:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.651 13:10:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.651 13:10:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.651 13:10:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.651 13:10:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.651 [2024-11-07 13:10:46.634162] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:05:38.651 [2024-11-07 13:10:46.634278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603097 ] 00:05:38.911 [2024-11-07 13:10:46.798080] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:38.911 [2024-11-07 13:10:46.798124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.172 [2024-11-07 13:10:46.950830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.172 [2024-11-07 13:10:46.953930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.172 [2024-11-07 13:10:46.953958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.112 [2024-11-07 13:10:47.915972] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3602792 has claimed it. 
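The error above is the expected mask overlap: the first target holds -m 0x7 (binary 00111, cores 0-2) while this second target was started with -m 0x1c (binary 11100, cores 2-4), so both masks contain core 2 and the second claim must fail there. The intersection can be checked with plain shell arithmetic (a worked sketch):

  $ printf '0x%x\n' $(( 0x7 & 0x1c ))   # intersection of the two core masks
  0x4                                   # only bit 2 set -> core 2 is contested

The target-side error is then surfaced to the RPC client as the JSON-RPC response that follows.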
00:05:40.112 request: 00:05:40.112 { 00:05:40.112 "method": "framework_enable_cpumask_locks", 00:05:40.112 "req_id": 1 00:05:40.112 } 00:05:40.112 Got JSON-RPC error response 00:05:40.112 response: 00:05:40.112 { 00:05:40.112 "code": -32603, 00:05:40.112 "message": "Failed to claim CPU core: 2" 00:05:40.112 } 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3602792 /var/tmp/spdk.sock 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3602792 ']' 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.112 13:10:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.112 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.112 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:40.112 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3603097 /var/tmp/spdk2.sock 00:05:40.112 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3603097 ']' 00:05:40.112 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.112 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.112 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
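For reference, the JSON-RPC exchange above can be reproduced by hand against a running target (a sketch; rpc.py and the socket paths are the ones used throughout this log):

  $ ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # Succeeds quietly when every core in the target's mask is free; returns the
  # -32603 "Failed to claim CPU core: 2" error shown above when another target
  # already holds one of the requested cores.

The NOT wrapper in the trace asserts exactly that failure, and the two waitforlisten calls confirm both targets are still serving their sockets afterwards.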
00:05:40.112 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.112 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.372 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.372 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:40.372 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:40.372 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.372 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.373 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.373 00:05:40.373 real 0m2.753s 00:05:40.373 user 0m0.886s 00:05:40.373 sys 0m0.153s 00:05:40.373 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:40.373 13:10:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.373 ************************************ 00:05:40.373 END TEST locking_overlapped_coremask_via_rpc 00:05:40.373 ************************************ 00:05:40.373 13:10:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:40.373 13:10:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3602792 ]] 00:05:40.373 13:10:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3602792 00:05:40.373 13:10:48 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3602792 ']' 00:05:40.373 13:10:48 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3602792 00:05:40.373 13:10:48 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:40.373 13:10:48 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:40.373 13:10:48 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3602792 00:05:40.633 13:10:48 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:40.633 13:10:48 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:40.633 13:10:48 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3602792' 00:05:40.633 killing process with pid 3602792 00:05:40.633 13:10:48 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3602792 00:05:40.633 13:10:48 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3602792 00:05:42.017 13:10:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3603097 ]] 00:05:42.018 13:10:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3603097 00:05:42.018 13:10:50 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3603097 ']' 00:05:42.018 13:10:50 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3603097 00:05:42.018 13:10:50 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:42.018 13:10:50 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:05:42.018 13:10:50 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3603097 00:05:42.278 13:10:50 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:42.278 13:10:50 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:42.278 13:10:50 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3603097' 00:05:42.278 killing process with pid 3603097 00:05:42.278 13:10:50 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3603097 00:05:42.278 13:10:50 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3603097 00:05:43.662 13:10:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.662 13:10:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:43.662 13:10:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3602792 ]] 00:05:43.662 13:10:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3602792 00:05:43.662 13:10:51 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3602792 ']' 00:05:43.662 13:10:51 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3602792 00:05:43.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3602792) - No such process 00:05:43.662 13:10:51 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3602792 is not found' 00:05:43.662 Process with pid 3602792 is not found 00:05:43.662 13:10:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3603097 ]] 00:05:43.662 13:10:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3603097 00:05:43.662 13:10:51 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3603097 ']' 00:05:43.662 13:10:51 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3603097 00:05:43.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3603097) - No such process 00:05:43.662 13:10:51 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3603097 is not found' 00:05:43.662 Process with pid 3603097 is not found 00:05:43.662 13:10:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.662 00:05:43.662 real 0m35.879s 00:05:43.662 user 0m58.106s 00:05:43.662 sys 0m6.422s 00:05:43.662 13:10:51 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.662 13:10:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.662 ************************************ 00:05:43.662 END TEST cpu_locks 00:05:43.662 ************************************ 00:05:43.662 00:05:43.662 real 1m4.439s 00:05:43.662 user 1m54.588s 00:05:43.662 sys 0m10.331s 00:05:43.662 13:10:51 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.662 13:10:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.662 ************************************ 00:05:43.662 END TEST event 00:05:43.662 ************************************ 00:05:43.662 13:10:51 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:43.662 13:10:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:43.662 13:10:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.662 13:10:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.662 ************************************ 00:05:43.662 START TEST thread 00:05:43.662 ************************************ 00:05:43.662 13:10:51 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:43.662 * Looking for test storage... 00:05:43.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:43.662 13:10:51 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:43.662 13:10:51 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:43.662 13:10:51 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:43.662 13:10:51 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:43.662 13:10:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.662 13:10:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.662 13:10:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.662 13:10:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.662 13:10:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.662 13:10:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.662 13:10:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.662 13:10:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.662 13:10:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.662 13:10:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.662 13:10:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.662 13:10:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:43.662 13:10:51 thread -- scripts/common.sh@345 -- # : 1 00:05:43.662 13:10:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.662 13:10:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.662 13:10:51 thread -- scripts/common.sh@365 -- # decimal 1 00:05:43.662 13:10:51 thread -- scripts/common.sh@353 -- # local d=1 00:05:43.662 13:10:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.662 13:10:51 thread -- scripts/common.sh@355 -- # echo 1 00:05:43.662 13:10:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.662 13:10:51 thread -- scripts/common.sh@366 -- # decimal 2 00:05:43.662 13:10:51 thread -- scripts/common.sh@353 -- # local d=2 00:05:43.662 13:10:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.662 13:10:51 thread -- scripts/common.sh@355 -- # echo 2 00:05:43.662 13:10:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.662 13:10:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.662 13:10:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.662 13:10:51 thread -- scripts/common.sh@368 -- # return 0 00:05:43.662 13:10:51 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.662 13:10:51 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:43.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.662 --rc genhtml_branch_coverage=1 00:05:43.662 --rc genhtml_function_coverage=1 00:05:43.662 --rc genhtml_legend=1 00:05:43.662 --rc geninfo_all_blocks=1 00:05:43.662 --rc geninfo_unexecuted_blocks=1 00:05:43.662 00:05:43.662 ' 00:05:43.662 13:10:51 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:43.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.662 --rc genhtml_branch_coverage=1 00:05:43.662 --rc genhtml_function_coverage=1 00:05:43.662 --rc genhtml_legend=1 00:05:43.662 --rc geninfo_all_blocks=1 00:05:43.662 --rc geninfo_unexecuted_blocks=1 00:05:43.662 
00:05:43.662 ' 00:05:43.662 13:10:51 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:43.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.662 --rc genhtml_branch_coverage=1 00:05:43.662 --rc genhtml_function_coverage=1 00:05:43.662 --rc genhtml_legend=1 00:05:43.662 --rc geninfo_all_blocks=1 00:05:43.662 --rc geninfo_unexecuted_blocks=1 00:05:43.662 00:05:43.662 ' 00:05:43.662 13:10:51 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:43.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.662 --rc genhtml_branch_coverage=1 00:05:43.662 --rc genhtml_function_coverage=1 00:05:43.662 --rc genhtml_legend=1 00:05:43.662 --rc geninfo_all_blocks=1 00:05:43.662 --rc geninfo_unexecuted_blocks=1 00:05:43.662 00:05:43.662 ' 00:05:43.662 13:10:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.662 13:10:51 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:43.662 13:10:51 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.662 13:10:51 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.662 ************************************ 00:05:43.662 START TEST thread_poller_perf 00:05:43.662 ************************************ 00:05:43.662 13:10:51 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.662 [2024-11-07 13:10:51.629884] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:05:43.663 [2024-11-07 13:10:51.629987] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3604216 ] 00:05:43.923 [2024-11-07 13:10:51.769709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.923 [2024-11-07 13:10:51.865326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.923 Running 1000 pollers for 1 seconds with 1 microseconds period. 
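In the summary that follows, poller_cost is derived from the raw counters: cycles per call is busy divided by total_run_count, and the nanosecond figure divides that by the TSC rate in GHz. Reproducing the first run's numbers with shell arithmetic (a worked check):

  $ echo $(( 2411498078 / 284000 ))             # busy cycles / total_run_count
  8491
  $ echo $(( 8491 * 1000000000 / 2400000000 ))  # cycles -> nsec at tsc_hz 2.4 GHz
  3537

So each invocation of a 1-microsecond timed poller costs roughly 8.5k cycles, about 3.5 microseconds on this 2.4 GHz machine.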
00:05:45.307 [2024-11-07T12:10:53.314Z] ====================================== 00:05:45.307 [2024-11-07T12:10:53.314Z] busy:2411498078 (cyc) 00:05:45.307 [2024-11-07T12:10:53.314Z] total_run_count: 284000 00:05:45.307 [2024-11-07T12:10:53.314Z] tsc_hz: 2400000000 (cyc) 00:05:45.307 [2024-11-07T12:10:53.314Z] ====================================== 00:05:45.307 [2024-11-07T12:10:53.314Z] poller_cost: 8491 (cyc), 3537 (nsec) 00:05:45.307 00:05:45.307 real 0m1.458s 00:05:45.307 user 0m1.308s 00:05:45.307 sys 0m0.143s 00:05:45.307 13:10:53 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.307 13:10:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.307 ************************************ 00:05:45.308 END TEST thread_poller_perf 00:05:45.308 ************************************ 00:05:45.308 13:10:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.308 13:10:53 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:45.308 13:10:53 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.308 13:10:53 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.308 ************************************ 00:05:45.308 START TEST thread_poller_perf 00:05:45.308 ************************************ 00:05:45.308 13:10:53 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.308 [2024-11-07 13:10:53.133387] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:05:45.308 [2024-11-07 13:10:53.133492] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3604890 ] 00:05:45.308 [2024-11-07 13:10:53.270362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.567 [2024-11-07 13:10:53.364401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.567 Running 1000 pollers for 1 seconds with 0 microseconds period. 
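This second pass registers the pollers with a 0-microsecond period, i.e. as busy pollers that run on every reactor iteration instead of off a timer. The same arithmetic applies to the counters reported below (a worked check):

  $ echo $(( 2403336498 / 3643000 ))           # busy cycles / total_run_count
  659
  $ echo $(( 659 * 1000000000 / 2400000000 ))  # cycles -> nsec at 2.4 GHz
  274

At roughly 659 cycles (~274 ns) per call, the busy-poller path is about an order of magnitude cheaper than the timed path above, consistent with the timed run spending most of each call on timer bookkeeping rather than on the poller body itself.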
00:05:46.952 [2024-11-07T12:10:54.959Z] ====================================== 00:05:46.952 [2024-11-07T12:10:54.959Z] busy:2403336498 (cyc) 00:05:46.952 [2024-11-07T12:10:54.959Z] total_run_count: 3643000 00:05:46.952 [2024-11-07T12:10:54.959Z] tsc_hz: 2400000000 (cyc) 00:05:46.952 [2024-11-07T12:10:54.959Z] ====================================== 00:05:46.952 [2024-11-07T12:10:54.959Z] poller_cost: 659 (cyc), 274 (nsec) 00:05:46.952 00:05:46.952 real 0m1.445s 00:05:46.952 user 0m1.301s 00:05:46.952 sys 0m0.138s 00:05:46.952 13:10:54 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.952 13:10:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.952 ************************************ 00:05:46.952 END TEST thread_poller_perf 00:05:46.952 ************************************ 00:05:46.952 13:10:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:46.952 00:05:46.952 real 0m3.217s 00:05:46.952 user 0m8.992s 00:05:46.952 sys 0m3.981s 00:05:46.952 13:10:54 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.952 13:10:54 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.952 ************************************ 00:05:46.952 END TEST thread 00:05:46.952 ************************************ 00:05:46.952 13:10:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:46.952 13:10:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:46.952 13:10:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.952 13:10:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.952 13:10:54 -- common/autotest_common.sh@10 -- # set +x 00:05:46.952 ************************************ 00:05:46.952 START TEST app_cmdline 00:05:46.952 ************************************ 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:46.952 * Looking for test storage... 
00:05:46.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.952 13:10:54 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.952 --rc genhtml_branch_coverage=1 00:05:46.952 --rc genhtml_function_coverage=1 00:05:46.952 --rc genhtml_legend=1 00:05:46.952 --rc geninfo_all_blocks=1 00:05:46.952 --rc geninfo_unexecuted_blocks=1 00:05:46.952 00:05:46.952 ' 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.952 --rc genhtml_branch_coverage=1 00:05:46.952 --rc genhtml_function_coverage=1 00:05:46.952 --rc genhtml_legend=1 00:05:46.952 --rc geninfo_all_blocks=1 00:05:46.952 --rc geninfo_unexecuted_blocks=1 
00:05:46.952 00:05:46.952 ' 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.952 --rc genhtml_branch_coverage=1 00:05:46.952 --rc genhtml_function_coverage=1 00:05:46.952 --rc genhtml_legend=1 00:05:46.952 --rc geninfo_all_blocks=1 00:05:46.952 --rc geninfo_unexecuted_blocks=1 00:05:46.952 00:05:46.952 ' 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.952 --rc genhtml_branch_coverage=1 00:05:46.952 --rc genhtml_function_coverage=1 00:05:46.952 --rc genhtml_legend=1 00:05:46.952 --rc geninfo_all_blocks=1 00:05:46.952 --rc geninfo_unexecuted_blocks=1 00:05:46.952 00:05:46.952 ' 00:05:46.952 13:10:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:46.952 13:10:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3605290 00:05:46.952 13:10:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3605290 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3605290 ']' 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:46.952 13:10:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:46.952 13:10:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:46.952 [2024-11-07 13:10:54.888724] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:05:46.952 [2024-11-07 13:10:54.888847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3605290 ] 00:05:47.214 [2024-11-07 13:10:55.040370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.214 [2024-11-07 13:10:55.138511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.786 13:10:55 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:47.786 13:10:55 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:47.786 13:10:55 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:48.046 { 00:05:48.046 "version": "SPDK v25.01-pre git sha1 b264e22f0", 00:05:48.046 "fields": { 00:05:48.046 "major": 25, 00:05:48.046 "minor": 1, 00:05:48.046 "patch": 0, 00:05:48.046 "suffix": "-pre", 00:05:48.046 "commit": "b264e22f0" 00:05:48.046 } 00:05:48.046 } 00:05:48.046 13:10:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:48.046 13:10:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:48.046 13:10:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:48.047 13:10:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:48.047 13:10:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 13:10:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:48.047 13:10:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.047 13:10:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:48.047 13:10:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:48.047 13:10:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:48.047 13:10:55 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.307 request: 00:05:48.307 { 00:05:48.307 "method": "env_dpdk_get_mem_stats", 00:05:48.307 "req_id": 1 00:05:48.307 } 00:05:48.307 Got JSON-RPC error response 00:05:48.307 response: 00:05:48.307 { 00:05:48.307 "code": -32601, 00:05:48.307 "message": "Method not found" 00:05:48.307 } 00:05:48.307 13:10:56 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:48.307 13:10:56 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.307 13:10:56 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.307 13:10:56 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.308 13:10:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3605290 00:05:48.308 13:10:56 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3605290 ']' 00:05:48.308 13:10:56 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3605290 00:05:48.308 13:10:56 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:48.308 13:10:56 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.308 13:10:56 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3605290 00:05:48.308 13:10:56 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:48.308 13:10:56 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:48.308 13:10:56 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3605290' 00:05:48.308 killing process with pid 3605290 00:05:48.308 13:10:56 app_cmdline -- common/autotest_common.sh@971 -- # kill 3605290 00:05:48.308 13:10:56 app_cmdline -- common/autotest_common.sh@976 -- # wait 3605290 00:05:50.224 00:05:50.224 real 0m3.195s 00:05:50.224 user 0m3.405s 00:05:50.224 sys 0m0.552s 00:05:50.224 13:10:57 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.224 13:10:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:50.224 ************************************ 00:05:50.224 END TEST app_cmdline 00:05:50.224 ************************************ 00:05:50.224 13:10:57 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:50.224 13:10:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:50.224 13:10:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.224 13:10:57 -- common/autotest_common.sh@10 -- # set +x 00:05:50.224 ************************************ 00:05:50.224 START TEST version 00:05:50.224 ************************************ 00:05:50.224 13:10:57 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:50.224 * Looking for test storage... 
00:05:50.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:50.224 13:10:57 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.224 13:10:57 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.224 13:10:57 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.224 13:10:58 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.224 13:10:58 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.224 13:10:58 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.224 13:10:58 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.224 13:10:58 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.224 13:10:58 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.224 13:10:58 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.224 13:10:58 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.224 13:10:58 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.224 13:10:58 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.224 13:10:58 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.224 13:10:58 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.224 13:10:58 version -- scripts/common.sh@344 -- # case "$op" in 00:05:50.224 13:10:58 version -- scripts/common.sh@345 -- # : 1 00:05:50.224 13:10:58 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.224 13:10:58 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.224 13:10:58 version -- scripts/common.sh@365 -- # decimal 1 00:05:50.224 13:10:58 version -- scripts/common.sh@353 -- # local d=1 00:05:50.224 13:10:58 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.224 13:10:58 version -- scripts/common.sh@355 -- # echo 1 00:05:50.224 13:10:58 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.224 13:10:58 version -- scripts/common.sh@366 -- # decimal 2 00:05:50.224 13:10:58 version -- scripts/common.sh@353 -- # local d=2 00:05:50.224 13:10:58 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.224 13:10:58 version -- scripts/common.sh@355 -- # echo 2 00:05:50.224 13:10:58 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.224 13:10:58 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.224 13:10:58 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.224 13:10:58 version -- scripts/common.sh@368 -- # return 0 00:05:50.224 13:10:58 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.224 13:10:58 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.224 --rc genhtml_branch_coverage=1 00:05:50.224 --rc genhtml_function_coverage=1 00:05:50.224 --rc genhtml_legend=1 00:05:50.224 --rc geninfo_all_blocks=1 00:05:50.224 --rc geninfo_unexecuted_blocks=1 00:05:50.224 00:05:50.224 ' 00:05:50.224 13:10:58 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.224 --rc genhtml_branch_coverage=1 00:05:50.224 --rc genhtml_function_coverage=1 00:05:50.224 --rc genhtml_legend=1 00:05:50.224 --rc geninfo_all_blocks=1 00:05:50.224 --rc geninfo_unexecuted_blocks=1 00:05:50.224 00:05:50.224 ' 00:05:50.224 13:10:58 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.224 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.224 --rc genhtml_branch_coverage=1 00:05:50.224 --rc genhtml_function_coverage=1 00:05:50.224 --rc genhtml_legend=1 00:05:50.224 --rc geninfo_all_blocks=1 00:05:50.224 --rc geninfo_unexecuted_blocks=1 00:05:50.224 00:05:50.224 ' 00:05:50.224 13:10:58 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.224 --rc genhtml_branch_coverage=1 00:05:50.224 --rc genhtml_function_coverage=1 00:05:50.224 --rc genhtml_legend=1 00:05:50.224 --rc geninfo_all_blocks=1 00:05:50.224 --rc geninfo_unexecuted_blocks=1 00:05:50.224 00:05:50.224 ' 00:05:50.224 13:10:58 version -- app/version.sh@17 -- # get_header_version major 00:05:50.224 13:10:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:50.224 13:10:58 version -- app/version.sh@14 -- # cut -f2 00:05:50.224 13:10:58 version -- app/version.sh@14 -- # tr -d '"' 00:05:50.224 13:10:58 version -- app/version.sh@17 -- # major=25 00:05:50.224 13:10:58 version -- app/version.sh@18 -- # get_header_version minor 00:05:50.224 13:10:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:50.224 13:10:58 version -- app/version.sh@14 -- # cut -f2 00:05:50.224 13:10:58 version -- app/version.sh@14 -- # tr -d '"' 00:05:50.224 13:10:58 version -- app/version.sh@18 -- # minor=1 00:05:50.224 13:10:58 version -- app/version.sh@19 -- # get_header_version patch 00:05:50.224 13:10:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:50.224 13:10:58 version -- app/version.sh@14 -- # cut -f2 00:05:50.224 13:10:58 version -- app/version.sh@14 -- # tr -d '"' 00:05:50.224 13:10:58 version -- app/version.sh@19 -- # patch=0 00:05:50.224 13:10:58 version -- app/version.sh@20 -- # get_header_version suffix 00:05:50.224 13:10:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:50.224 13:10:58 version -- app/version.sh@14 -- # cut -f2 00:05:50.224 13:10:58 version -- app/version.sh@14 -- # tr -d '"' 00:05:50.224 13:10:58 version -- app/version.sh@20 -- # suffix=-pre 00:05:50.224 13:10:58 version -- app/version.sh@22 -- # version=25.1 00:05:50.224 13:10:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:50.224 13:10:58 version -- app/version.sh@28 -- # version=25.1rc0 00:05:50.224 13:10:58 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:50.224 13:10:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:50.224 13:10:58 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:50.224 13:10:58 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:50.224 00:05:50.224 real 0m0.275s 00:05:50.224 user 0m0.164s 00:05:50.224 sys 0m0.153s 00:05:50.224 13:10:58 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.224 
13:10:58 version -- common/autotest_common.sh@10 -- # set +x 00:05:50.224 ************************************ 00:05:50.224 END TEST version 00:05:50.224 ************************************ 00:05:50.224 13:10:58 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:50.224 13:10:58 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:50.224 13:10:58 -- spdk/autotest.sh@194 -- # uname -s 00:05:50.224 13:10:58 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:50.224 13:10:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:50.224 13:10:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:50.224 13:10:58 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:50.224 13:10:58 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:50.224 13:10:58 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:50.224 13:10:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:50.224 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:05:50.224 13:10:58 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:50.224 13:10:58 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:50.224 13:10:58 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:50.224 13:10:58 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:50.224 13:10:58 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:50.224 13:10:58 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:50.224 13:10:58 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:50.224 13:10:58 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:50.224 13:10:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.224 13:10:58 -- common/autotest_common.sh@10 -- # set +x 00:05:50.485 ************************************ 00:05:50.485 START TEST nvmf_tcp 00:05:50.485 ************************************ 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:50.485 * Looking for test storage... 
00:05:50.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.485 13:10:58 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.485 --rc genhtml_branch_coverage=1 00:05:50.485 --rc genhtml_function_coverage=1 00:05:50.485 --rc genhtml_legend=1 00:05:50.485 --rc geninfo_all_blocks=1 00:05:50.485 --rc geninfo_unexecuted_blocks=1 00:05:50.485 00:05:50.485 ' 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.485 --rc genhtml_branch_coverage=1 00:05:50.485 --rc genhtml_function_coverage=1 00:05:50.485 --rc genhtml_legend=1 00:05:50.485 --rc geninfo_all_blocks=1 00:05:50.485 --rc geninfo_unexecuted_blocks=1 00:05:50.485 00:05:50.485 ' 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:50.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.485 --rc genhtml_branch_coverage=1 00:05:50.485 --rc genhtml_function_coverage=1 00:05:50.485 --rc genhtml_legend=1 00:05:50.485 --rc geninfo_all_blocks=1 00:05:50.485 --rc geninfo_unexecuted_blocks=1 00:05:50.485 00:05:50.485 ' 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.485 --rc genhtml_branch_coverage=1 00:05:50.485 --rc genhtml_function_coverage=1 00:05:50.485 --rc genhtml_legend=1 00:05:50.485 --rc geninfo_all_blocks=1 00:05:50.485 --rc geninfo_unexecuted_blocks=1 00:05:50.485 00:05:50.485 ' 00:05:50.485 13:10:58 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:50.485 13:10:58 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:50.485 13:10:58 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.485 13:10:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.485 ************************************ 00:05:50.485 START TEST nvmf_target_core 00:05:50.485 ************************************ 00:05:50.486 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:50.747 * Looking for test storage... 00:05:50.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.747 --rc genhtml_branch_coverage=1 00:05:50.747 --rc genhtml_function_coverage=1 00:05:50.747 --rc genhtml_legend=1 00:05:50.747 --rc geninfo_all_blocks=1 00:05:50.747 --rc geninfo_unexecuted_blocks=1 00:05:50.747 00:05:50.747 ' 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.747 --rc genhtml_branch_coverage=1 00:05:50.747 --rc genhtml_function_coverage=1 00:05:50.747 --rc genhtml_legend=1 00:05:50.747 --rc geninfo_all_blocks=1 00:05:50.747 --rc geninfo_unexecuted_blocks=1 00:05:50.747 00:05:50.747 ' 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.747 --rc genhtml_branch_coverage=1 00:05:50.747 --rc genhtml_function_coverage=1 00:05:50.747 --rc genhtml_legend=1 00:05:50.747 --rc geninfo_all_blocks=1 00:05:50.747 --rc geninfo_unexecuted_blocks=1 00:05:50.747 00:05:50.747 ' 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.747 --rc genhtml_branch_coverage=1 00:05:50.747 --rc genhtml_function_coverage=1 00:05:50.747 --rc genhtml_legend=1 00:05:50.747 --rc geninfo_all_blocks=1 00:05:50.747 --rc geninfo_unexecuted_blocks=1 00:05:50.747 00:05:50.747 ' 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:50.747 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:50.748 
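Each test script opens with the same preamble traced above: detect the installed lcov, and when it is older than 2.x, select the legacy --rc option spellings recorded into LCOV_OPTS/LCOV. The cmp_versions walk at the top of this excerpt reduces to a field-by-field dotted-version compare; a minimal stand-in sketch (not the actual scripts/common.sh implementation) looks roughly like:

# "lt A B" succeeds when dotted version A < version B
lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1    # equal: not less-than
}
if lt 1.15 2; then    # this run's lcov reports 1.15
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi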
************************************ 00:05:50.748 START TEST nvmf_abort 00:05:50.748 ************************************ 00:05:50.748 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:51.009 * Looking for test storage... 00:05:51.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:51.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.009 --rc genhtml_branch_coverage=1 00:05:51.009 --rc genhtml_function_coverage=1 00:05:51.009 --rc genhtml_legend=1 00:05:51.009 --rc geninfo_all_blocks=1 00:05:51.009 --rc geninfo_unexecuted_blocks=1 00:05:51.009 00:05:51.009 ' 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:51.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.009 --rc genhtml_branch_coverage=1 00:05:51.009 --rc genhtml_function_coverage=1 00:05:51.009 --rc genhtml_legend=1 00:05:51.009 --rc geninfo_all_blocks=1 00:05:51.009 --rc geninfo_unexecuted_blocks=1 00:05:51.009 00:05:51.009 ' 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:51.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.009 --rc genhtml_branch_coverage=1 00:05:51.009 --rc genhtml_function_coverage=1 00:05:51.009 --rc genhtml_legend=1 00:05:51.009 --rc geninfo_all_blocks=1 00:05:51.009 --rc geninfo_unexecuted_blocks=1 00:05:51.009 00:05:51.009 ' 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:51.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.009 --rc genhtml_branch_coverage=1 00:05:51.009 --rc genhtml_function_coverage=1 00:05:51.009 --rc genhtml_legend=1 00:05:51.009 --rc geninfo_all_blocks=1 00:05:51.009 --rc geninfo_unexecuted_blocks=1 00:05:51.009 00:05:51.009 ' 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.009 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
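The repeated "common.sh: line 33: [: : integer expression expected" message woven through the trace (once per source of nvmf/common.sh) is a script wart rather than a test failure: the '[' '' -eq 1 ']' step above hands the test builtin an empty string where -eq needs an integer, so the test exits with status 2 and the branch is simply not taken. A two-line reproduction plus a defensive spelling (the guard is a suggested fix, not what common.sh currently does):

flag=''                    # e.g. an SPDK_* toggle left unset in this CI config
[ "$flag" -eq 1 ]          # bash: [: : integer expression expected (status 2)
[ "${flag:-0}" -eq 1 ]     # hypothetical guard: defaults to 0, test is just false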
00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:51.010 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:59.182 13:11:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:59.182 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:59.182 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:59.182 13:11:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:59.182 Found net devices under 0000:31:00.0: cvl_0_0 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:59.182 Found net devices under 0000:31:00.1: cvl_0_1 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:59.182 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:59.183 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:59.183 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:59.183 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:59.183 13:11:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:59.183 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:59.183 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:59.183 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:59.183 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:59.183 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:59.183 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:59.183 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:59.183 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:59.183 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:59.183 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:59.183 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:59.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:59.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:05:59.443 00:05:59.443 --- 10.0.0.2 ping statistics --- 00:05:59.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:59.443 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:59.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:59.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:05:59.443 00:05:59.443 --- 10.0.0.1 ping statistics --- 00:05:59.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:59.443 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3610429 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3610429 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3610429 ']' 00:05:59.443 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.444 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:59.444 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.444 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:59.444 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:59.444 [2024-11-07 13:11:07.412961] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
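By this point nvmftestinit has found the two Intel E810 ports (PCI 0x8086:0x159b, ice driver, netdevs cvl_0_0 and cvl_0_1), split them across a network namespace so that initiator and target traverse real wire, ping-verified both directions, and launched the target inside the namespace. Condensed from the trace (same names and addresses), the setup is roughly:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # core mask 0xE = 1110b,
                                                   # hence reactors on cores 1-3 below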
00:05:59.444 [2024-11-07 13:11:07.413062] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:59.705 [2024-11-07 13:11:07.579661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.705 [2024-11-07 13:11:07.698667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:59.705 [2024-11-07 13:11:07.698741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:59.705 [2024-11-07 13:11:07.698756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:59.705 [2024-11-07 13:11:07.698769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:59.705 [2024-11-07 13:11:07.698780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:59.705 [2024-11-07 13:11:07.701490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.705 [2024-11-07 13:11:07.701617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.705 [2024-11-07 13:11:07.701643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.276 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:00.276 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:06:00.276 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:00.276 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:00.277 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.277 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:00.277 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:00.277 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.277 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.277 [2024-11-07 13:11:08.215172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.277 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.277 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:00.277 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.277 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.537 Malloc0 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.537 Delay0 
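With the target up and its RPC socket answering, abort.sh configures it over JSON-RPC; rpc_cmd in the trace is autotest's wrapper around scripts/rpc.py. The equivalent standalone calls, arguments verbatim from the trace and assuming the default /var/tmp/spdk.sock:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0    # 64 MiB bdev, 4 KiB blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000         # latencies in microseconds:
                                                        # ~1 s added per I/O class

The delay bdev is the point of the exercise: with roughly a second of injected latency, submitted I/Os sit in the target long enough for the abort requests that follow to actually catch them in flight.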
00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.537 [2024-11-07 13:11:08.315040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.537 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.538 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:00.538 [2024-11-07 13:11:08.472564] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:03.082 Initializing NVMe Controllers 00:06:03.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:03.082 controller IO queue size 128 less than required 00:06:03.082 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:03.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:03.082 Initialization complete. Launching workers. 
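The workload behind the counters below is wired up with three more RPCs and the abort example binary, again lifted verbatim from the trace (-a on the subsystem allows any host NQN; SPDK0 is the serial number):

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128

In the results that follow, the ~27k "failed" I/Os line up with the 27403 successful aborts: an aborted request completes with an abort status, which is exactly the behavior this test is exercising.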
00:06:03.082 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27403 00:06:03.082 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27460, failed to submit 66 00:06:03.082 success 27403, unsuccessful 57, failed 0 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:03.082 rmmod nvme_tcp 00:06:03.082 rmmod nvme_fabrics 00:06:03.082 rmmod nvme_keyring 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3610429 ']' 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3610429 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3610429 ']' 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3610429 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3610429 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3610429' 00:06:03.082 killing process with pid 3610429 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3610429 00:06:03.082 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3610429 00:06:03.656 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:03.656 13:11:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:03.656 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:03.656 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:03.656 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:03.656 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:03.656 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:03.656 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:03.656 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:03.656 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:03.656 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:03.656 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:05.568 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:05.568 00:06:05.568 real 0m14.835s 00:06:05.568 user 0m15.160s 00:06:05.568 sys 0m7.290s 00:06:05.568 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.568 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:05.568 ************************************ 00:06:05.568 END TEST nvmf_abort 00:06:05.568 ************************************ 00:06:05.568 13:11:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:05.568 13:11:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:05.569 13:11:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.569 13:11:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:05.569 ************************************ 00:06:05.569 START TEST nvmf_ns_hotplug_stress 00:06:05.569 ************************************ 00:06:05.569 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:05.830 * Looking for test storage... 
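Before the ns_hotplug_stress preamble above continues, note how nvmf_abort was closed out: nvmftestfini unwinds the setup in reverse. Condensed from the trace, with the same pid and device names (the explicit netns delete is an assumption about what _remove_spdk_ns performs):

sync
modprobe -v -r nvme-tcp                     # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
kill 3610429 && wait 3610429                # stop the namespaced nvmf_tgt (reactor_1)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK_NVMF ACCEPT rule
ip netns delete cvl_0_0_ns_spdk             # _remove_spdk_ns
ip -4 addr flush cvl_0_1

Each target test in nvmf_target_core.sh repeats this setup/run/teardown cycle, which is why the lcov probe, common.sh sourcing, and NIC discovery reappear nearly verbatim for nvmf_ns_hotplug_stress below.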
00:06:05.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.830 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.831 --rc genhtml_branch_coverage=1 00:06:05.831 --rc genhtml_function_coverage=1 00:06:05.831 --rc genhtml_legend=1 00:06:05.831 --rc geninfo_all_blocks=1 00:06:05.831 --rc geninfo_unexecuted_blocks=1 00:06:05.831 00:06:05.831 ' 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.831 --rc genhtml_branch_coverage=1 00:06:05.831 --rc genhtml_function_coverage=1 00:06:05.831 --rc genhtml_legend=1 00:06:05.831 --rc geninfo_all_blocks=1 00:06:05.831 --rc geninfo_unexecuted_blocks=1 00:06:05.831 00:06:05.831 ' 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:05.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.831 --rc genhtml_branch_coverage=1 00:06:05.831 --rc genhtml_function_coverage=1 00:06:05.831 --rc genhtml_legend=1 00:06:05.831 --rc geninfo_all_blocks=1 00:06:05.831 --rc geninfo_unexecuted_blocks=1 00:06:05.831 00:06:05.831 ' 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:05.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.831 --rc genhtml_branch_coverage=1 00:06:05.831 --rc genhtml_function_coverage=1 00:06:05.831 --rc genhtml_legend=1 00:06:05.831 --rc geninfo_all_blocks=1 00:06:05.831 --rc geninfo_unexecuted_blocks=1 00:06:05.831 00:06:05.831 ' 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:05.831 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.971 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:13.971 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.972 
13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:13.972 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:13.972 Found net devices under 0000:31:00.0: cvl_0_0 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:13.972 Found net devices under 0000:31:00.1: cvl_0_1 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:13.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:13.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:06:13.972 00:06:13.972 --- 10.0.0.2 ping statistics --- 00:06:13.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.972 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:13.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:13.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:06:13.972 00:06:13.972 --- 10.0.0.1 ping statistics --- 00:06:13.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.972 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:13.972 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:14.234 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:14.234 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:14.234 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:14.234 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.234 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3616038 00:06:14.234 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3616038 00:06:14.234 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:14.234 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
3616038 ']' 00:06:14.234 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.234 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:14.234 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.234 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:14.234 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.234 [2024-11-07 13:11:22.099761] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:06:14.234 [2024-11-07 13:11:22.099907] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.494 [2024-11-07 13:11:22.282143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.494 [2024-11-07 13:11:22.408202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.495 [2024-11-07 13:11:22.408270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:14.495 [2024-11-07 13:11:22.408283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.495 [2024-11-07 13:11:22.408297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.495 [2024-11-07 13:11:22.408308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
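The TCP fixture established above splits the two E810 ports (evidently cabled to each other on this runner) across network namespaces: cvl_0_0 becomes the target side (10.0.0.2/24) inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), port 4420 is opened in iptables, and a cross-ping in each direction proves the path. Condensed from the trace above (interface names and addresses are specific to this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator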
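From here to the end of the section the same four RPCs repeat once per iteration while the 30-second spdk_nvme_perf randread job (PERF_PID 3616483, launched at ns_hotplug_stress.sh@40) drives I/O against the subsystem: check the perf process is still alive, hot-remove namespace 1, hot-add the Delay0 bdev back, and resize NULL1 up one step. A minimal bash sketch of that cycle, assuming rpc.py on PATH and the names used in this run (the authoritative version is spdk/test/nvmf/target/ns_hotplug_stress.sh):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do              # loop while spdk_nvme_perf is alive
    $rpc_py nvmf_subsystem_remove_ns "$nqn" 1          # hot-remove nsid 1 under active I/O
    $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0        # hot-add the delay bdev as a namespace
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"        # grow the null bdev one step per pass
done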
00:06:14.495 [2024-11-07 13:11:22.410889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.495 [2024-11-07 13:11:22.411076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.495 [2024-11-07 13:11:22.411165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.066 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:15.066 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:06:15.066 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:15.066 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.066 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:15.066 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.066 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:15.066 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:15.066 [2024-11-07 13:11:23.063799] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.326 13:11:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.326 13:11:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.586 [2024-11-07 13:11:23.430761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.586 13:11:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:15.846 13:11:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:15.846 Malloc0 00:06:16.106 13:11:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:16.106 Delay0 00:06:16.106 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.367 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:16.629 NULL1 00:06:16.629 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:16.629 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:16.629 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3616483 00:06:16.629 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:16.629 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.890 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.151 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:17.151 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:17.151 true 00:06:17.151 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:17.151 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.413 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.674 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:17.674 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:17.674 true 00:06:17.935 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:17.935 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.935 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.196 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:18.196 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:18.458 true 00:06:18.458 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:18.458 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.458 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.719 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:18.719 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:18.981 true 00:06:18.981 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:18.982 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.982 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.243 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:19.243 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:19.504 true 00:06:19.504 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:19.504 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.765 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.765 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:19.765 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:20.026 true 00:06:20.026 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:20.026 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.286 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.286 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:20.286 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:20.546 true 00:06:20.546 13:11:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:20.546 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.807 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.807 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:20.807 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:21.066 true 00:06:21.066 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:21.066 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.327 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.587 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:21.587 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:21.587 true 00:06:21.587 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:21.587 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.846 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.106 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:22.106 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:22.106 true 00:06:22.106 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:22.106 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.368 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.629 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:22.629 13:11:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:22.629 true 00:06:22.629 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:22.629 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.890 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.150 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:23.150 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:23.150 true 00:06:23.411 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:23.411 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.411 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.671 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:23.672 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:23.672 true 00:06:23.932 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:23.932 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.933 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.193 13:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:24.193 13:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:24.453 true 00:06:24.453 13:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:24.453 13:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.453 13:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.713 13:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:24.714 13:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:24.974 true 00:06:24.974 13:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:24.975 13:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.975 13:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.236 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:25.236 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:25.497 true 00:06:25.497 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:25.497 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.497 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.758 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:25.758 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:26.018 true 00:06:26.018 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:26.018 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.018 13:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.278 13:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:26.278 13:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:26.556 true 00:06:26.556 13:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:26.556 13:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.886 13:11:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.886 13:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:26.886 13:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:27.204 true 00:06:27.204 13:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:27.204 13:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.205 13:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.465 13:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:27.466 13:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:27.726 true 00:06:27.726 13:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:27.726 13:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.726 13:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.986 13:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:27.986 13:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:28.246 true 00:06:28.246 13:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:28.246 13:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.246 13:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.509 13:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:28.509 13:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:28.774 true 00:06:28.774 13:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:28.774 13:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.035 13:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.035 13:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:29.035 13:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:29.296 true 00:06:29.296 13:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:29.296 13:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.556 13:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.556 13:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:29.556 13:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:29.816 true 00:06:29.816 13:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:29.816 13:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.077 13:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.077 13:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:30.077 13:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:30.339 true 00:06:30.339 13:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:30.339 13:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.600 13:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.860 13:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:30.860 13:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:30.860 true 00:06:30.860 13:11:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:30.861 13:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.121 13:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.382 13:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:31.382 13:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:31.382 true 00:06:31.382 13:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:31.382 13:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.643 13:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.902 13:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:31.902 13:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:31.902 true 00:06:31.902 13:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:31.902 13:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.162 13:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.423 13:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:32.423 13:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:32.423 true 00:06:32.684 13:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:32.684 13:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.684 13:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.944 13:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:32.945 13:11:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:33.205 true 00:06:33.205 13:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:33.205 13:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.205 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.465 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:33.465 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:33.726 true 00:06:33.726 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:33.726 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.726 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.986 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:33.987 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:34.247 true 00:06:34.247 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:34.247 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.507 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.507 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:34.507 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:34.767 true 00:06:34.767 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:34.767 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.027 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.027 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:35.027 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:35.287 true 00:06:35.288 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:35.288 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.548 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.548 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:35.548 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:35.809 true 00:06:35.810 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:35.810 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.070 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.330 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:36.330 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:36.330 true 00:06:36.330 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:36.330 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.591 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.851 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:36.851 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:36.851 true 00:06:36.851 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:36.851 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.112 13:11:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.374 13:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:37.374 13:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:37.374 true 00:06:37.374 13:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:37.374 13:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.635 13:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.895 13:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:37.895 13:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:37.895 true 00:06:38.155 13:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:38.155 13:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.155 13:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.416 13:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:38.416 13:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:38.676 true 00:06:38.676 13:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:38.676 13:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.676 13:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.938 13:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:38.938 13:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:39.198 true 00:06:39.198 13:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:39.198 13:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.198 13:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.459 13:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:39.459 13:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:39.720 true 00:06:39.720 13:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:39.720 13:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.980 13:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.980 13:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:39.980 13:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:40.241 true 00:06:40.241 13:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:40.241 13:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.502 13:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.502 13:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:40.502 13:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:40.763 true 00:06:40.763 13:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:40.763 13:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.024 13:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.024 13:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:41.024 13:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:41.284 true 00:06:41.284 13:11:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:41.284 13:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.545 13:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.545 13:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:41.545 13:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:41.804 true 00:06:41.804 13:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:41.804 13:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.065 13:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.065 13:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:42.065 13:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:42.325 true 00:06:42.326 13:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:42.326 13:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.586 13:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.846 13:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:42.846 13:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:42.846 true 00:06:42.846 13:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:42.846 13:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.106 13:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.366 13:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:43.366 13:11:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:43.366 true 00:06:43.366 13:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:43.366 13:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.627 13:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.891 13:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:43.891 13:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:43.891 true 00:06:44.152 13:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:44.152 13:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.152 13:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.413 13:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:44.413 13:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:44.673 true 00:06:44.673 13:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:44.673 13:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.673 13:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.934 13:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:44.934 13:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:45.194 true 00:06:45.194 13:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:45.194 13:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.455 13:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.455 13:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:06:45.455 13:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:06:45.715 true 00:06:45.715 13:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:45.715 13:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.976 13:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.976 13:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:06:45.976 13:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:06:46.235 true 00:06:46.235 13:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:46.235 13:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.496 13:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.496 13:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:06:46.496 13:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:06:46.756 true 00:06:46.756 13:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483 00:06:46.756 13:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.016 13:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.016 Initializing NVMe Controllers 00:06:47.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:47.016 Controller IO queue size 128, less than required. 00:06:47.016 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:47.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:47.016 Initialization complete. Launching workers. 
00:06:47.016 ========================================================
00:06:47.016                                                                           Latency(us)
00:06:47.016 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:47.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   27497.20      13.43    4655.12    1602.14   11697.00
00:06:47.016 ========================================================
00:06:47.016 Total                                                                    :   27497.20      13.43    4655.12    1602.14   11697.00
00:06:47.276 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:06:47.276 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:06:47.276 true
00:06:47.276 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3616483
00:06:47.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3616483) - No such process
00:06:47.276 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3616483
00:06:47.276 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:47.536 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:47.796 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:47.796 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:47.796 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:47.796 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:47.796 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:47.796 null0
00:06:47.796 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:47.796 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:47.796 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:48.056 null1
00:06:48.057 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:48.057 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:48.057 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:48.316 null2
00:06:48.316 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:48.316 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
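[Annotation] The summary block above is the I/O generator's exit report for the namespace it was exercising over TCP. Two quick sanity checks on those numbers (reader's arithmetic, not part of the log): the MiB/s column implies roughly 512 bytes per I/O (13.43 MiB/s divided by 27497.20 IOPS is about 512 B), and by Little's law the sustained in-flight depth is

    L = \lambda W = 27497.20\ \mathrm{IO/s} \times 4655.12\ \mu\mathrm{s} \approx 128

which matches the earlier "Controller IO queue size 128, less than required" warning exactly: the workload kept the controller's 128-entry I/O queue saturated for the whole run, with the excess requests queued at the NVMe driver as the warning says, so the average latency here is queue-bound rather than device-bound. Immediately after the report, kill -0 3616483 fails with "No such process": the generator has exited, so the @44 loop ends, the script reaps the PID with wait, strips namespaces 1 and 2, and sets up the eight-thread phase.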
13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:48.316 null3 00:06:48.316 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.316 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.316 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:48.577 null4 00:06:48.577 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.577 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.577 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:48.837 null5 00:06:48.838 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.838 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.838 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:48.838 null6 00:06:48.838 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.099 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.099 13:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:49.099 null7 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
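[Annotation] At @58-@64 the script fans out into the concurrent phase: it creates eight 100 MB null bdevs (null0 through null7, 4096-byte blocks), then launches one add_remove worker per bdev in the background, recording each worker's PID with pids+=($!), as the entries above and below show. A compact sketch of those two loops, inferred from the logged line numbers (loop structure assumed; $rpc as in the earlier sketch):

    # Sketch of ns_hotplug_stress.sh lines 58-64 as reconstructed from the log.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096   # 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &            # worker i churns NSID i+1 backed by null$i
        pids+=($!)                                  # remember the worker PID for the final wait
    done

The NSID-to-bdev mapping (1 to null0, 2 to null1, and so on) is visible in the "local nsid=... bdev=..." entries that follow.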
00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.099 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
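[Annotation] Each worker runs the add_remove function traced at @14-@18: given a namespace ID and a bdev, it attaches and detaches that namespace ten times. A hedged reconstruction from the logged line numbers, not the script's exact text:

    # Reconstruction of add_remove() (ns_hotplug_stress.sh lines 14-18, names from the log).
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Pinning the namespace ID with -n matters here: eight workers target the same subsystem concurrently, and fixed IDs keep each worker's namespace from colliding with its neighbors'.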
00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3623160 3623162 3623165 3623168 3623171 3623174 3623177 3623179 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.100 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:49.361 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.361 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.361 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:49.361 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:06:49.361 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.361 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.361 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:49.361 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.621 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:49.622 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.622 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.622 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:49.622 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.622 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:49.622 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
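[Annotation] The wait at @66 (wait 3623160 3623162 3623165 3623168 3623171 3623174 3623177 3623179) blocks until all eight workers finish, and because the workers run concurrently their @16-@18 entries interleave freely from here on, with adds for one NSID landing between removes for another. The collect-then-reap pattern in isolation, as a standalone runnable sketch:

    # Background jobs collected into an array, then reaped in a single wait.
    pids=()
    for n in 1 2 3; do
        sleep "$n" &          # stand-in for one background add_remove worker
        pids+=($!)            # $! is the PID of the most recent background job
    done
    wait "${pids[@]}"         # returns only after every listed PID has exited

Waiting on the explicit PID list, rather than a bare wait, also propagates a nonzero exit status if any worker failed, which is what lets the test harness detect a worker that died mid-churn.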
00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.881 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.143 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.143 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.143 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.143 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:06:50.143 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.143 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.143 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.143 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.143 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.143 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.143 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.143 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.143 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.143 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.404 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.665 13:11:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
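[Annotation] While the churn runs, the live namespace map of cnode1 changes many times per second. One way to snapshot it mid-test is the nvmf_get_subsystems RPC (a standard rpc.py method; the jq filter and the exact output shape sketched here are assumptions):

    # Snapshot the namespaces currently attached to cnode1 during the hotplug churn.
    "$rpc" nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'

Any given snapshot may show anywhere from zero to eight namespaces attached, depending on where each worker happens to be in its add/remove cycle; that transient emptiness is exactly the hotplug window this stress test is probing.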
00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.665 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.926 13:11:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.926 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.186 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.186 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.186 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.186 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.186 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.186 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.186 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.186 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.186 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.186 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.186 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.186 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.186 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.186 13:11:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.186 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.187 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.187 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.448 13:11:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.448 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.709 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.970 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.970 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.970 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.970 13:11:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.970 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.970 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.970 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.970 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.970 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.970 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.970 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.970 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.971 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.971 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.971 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.971 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.971 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.971 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.971 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.971 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.971 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.971 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.231 13:11:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.231 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.231 13:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.231 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.491 13:12:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.491 13:12:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.491 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.750 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.750 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.750 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.750 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.750 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.751 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.751 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.751 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.751 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.751 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.751 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.751 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.751 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.751 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.009 13:12:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:53.009 rmmod nvme_tcp 00:06:53.009 rmmod nvme_fabrics 00:06:53.009 rmmod nvme_keyring 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3616038 ']' 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3616038 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3616038 ']' 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3616038 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3616038 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3616038' 00:06:53.009 killing process with pid 3616038 00:06:53.009 13:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3616038 00:06:53.009 13:12:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3616038 00:06:53.577 13:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:53.577 13:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:53.577 13:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:53.577 13:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:53.577 13:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:53.577 13:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:53.577 13:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:53.577 13:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:53.577 13:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:53.577 13:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.577 13:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:53.577 13:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:56.138 00:06:56.138 real 0m50.100s 00:06:56.138 user 3m20.428s 00:06:56.138 sys 0m17.229s 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:56.138 ************************************ 00:06:56.138 END TEST nvmf_ns_hotplug_stress 00:06:56.138 ************************************ 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:56.138 ************************************ 00:06:56.138 START TEST nvmf_delete_subsystem 00:06:56.138 ************************************ 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:56.138 * Looking for test storage... 
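Between the loop and the END TEST banner the harness unwinds the target: nvmftestfini clears the EXIT trap, nvmfcleanup syncs and unloads nvme-tcp (dragging out nvme_fabrics and nvme_keyring with it, per the rmmod lines), killprocess confirms pid 3616038 is alive and is the reactor_1 process before killing and reaping it, iptr strips the SPDK_NVMF-tagged iptables rules, and the leftover cvl_0_1 address is flushed. A condensed sketch of that order of operations, assuming the target pid is in $tgt_pid (the real helpers in nvmf/common.sh and autotest_common.sh add retry loops and error handling not shown here):

    # Simplified teardown mirroring the trace; not the actual helper bodies.
    sync
    modprobe -v -r nvme-tcp                  # also removes nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    if kill -0 "$tgt_pid" 2> /dev/null; then
            kill "$tgt_pid"
            wait "$tgt_pid"                  # reap the nvmf_tgt reactor
    fi
    # Drop only the SPDK_NVMF-tagged rules; keep the rest of the firewall.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk          # assumption: what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1

The real/user split in the timing summary (about 50 s of wall time against 3m20 s of CPU time) is consistent with the parallel add/remove loops above. With the hotplug test closed out, run_test launches nvmf_delete_subsystem, whose storage probe and environment setup follow.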
00:06:56.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.138 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:56.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.139 --rc genhtml_branch_coverage=1 00:06:56.139 --rc genhtml_function_coverage=1 00:06:56.139 --rc genhtml_legend=1 00:06:56.139 --rc geninfo_all_blocks=1 00:06:56.139 --rc geninfo_unexecuted_blocks=1 00:06:56.139 00:06:56.139 ' 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:56.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.139 --rc genhtml_branch_coverage=1 00:06:56.139 --rc genhtml_function_coverage=1 00:06:56.139 --rc genhtml_legend=1 00:06:56.139 --rc geninfo_all_blocks=1 00:06:56.139 --rc geninfo_unexecuted_blocks=1 00:06:56.139 00:06:56.139 ' 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:56.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.139 --rc genhtml_branch_coverage=1 00:06:56.139 --rc genhtml_function_coverage=1 00:06:56.139 --rc genhtml_legend=1 00:06:56.139 --rc geninfo_all_blocks=1 00:06:56.139 --rc geninfo_unexecuted_blocks=1 00:06:56.139 00:06:56.139 ' 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:56.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.139 --rc genhtml_branch_coverage=1 00:06:56.139 --rc genhtml_function_coverage=1 00:06:56.139 --rc genhtml_legend=1 00:06:56.139 --rc geninfo_all_blocks=1 00:06:56.139 --rc geninfo_unexecuted_blocks=1 00:06:56.139 00:06:56.139 ' 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:56.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:56.139 13:12:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.279 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:04.280 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.280 
13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:04.280 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:04.280 Found net devices under 0000:31:00.0: cvl_0_0 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:04.280 Found net devices under 0000:31:00.1: cvl_0_1 
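The discovery loop traced above is how the harness turns a supported PCI ID into a usable interface name: it globs the device's sysfs net/ directory and keeps interfaces that are up. A minimal standalone sketch of that lookup (the 0000:31:00.x addresses and cvl_0_* names are from this run and will differ on other hosts; reading operstate is an assumption about where the "up == up" check comes from, not a copy of common.sh):

    #!/usr/bin/env bash
    # Sketch: resolve PCI functions to their kernel netdevs via sysfs,
    # mirroring the pci_net_devs glob traced above.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] || continue            # glob may match nothing
            dev=${path##*/}                       # e.g. cvl_0_0
            state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
            echo "Found net devices under $pci: $dev ($state)"
        done
    done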
00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:04.280 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:04.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:07:04.541 00:07:04.541 --- 10.0.0.2 ping statistics --- 00:07:04.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.541 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:04.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:07:04.541 00:07:04.541 --- 10.0.0.1 ping statistics --- 00:07:04.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.541 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:04.541 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:04.542 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:04.542 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:04.542 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:04.542 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.542 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3629546 00:07:04.542 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3629546 00:07:04.542 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:04.542 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3629546 ']' 00:07:04.542 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.542 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:04.542 13:12:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.542 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:04.542 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.802 [2024-11-07 13:12:12.546486] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:07:04.802 [2024-11-07 13:12:12.546623] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.802 [2024-11-07 13:12:12.710251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.061 [2024-11-07 13:12:12.808509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.062 [2024-11-07 13:12:12.808554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.062 [2024-11-07 13:12:12.808567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.062 [2024-11-07 13:12:12.808581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.062 [2024-11-07 13:12:12.808590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:05.062 [2024-11-07 13:12:12.810403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.062 [2024-11-07 13:12:12.810424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.322 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:05.322 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:07:05.322 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:05.322 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:05.322 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.582 [2024-11-07 13:12:13.364798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:05.582 13:12:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.582 [2024-11-07 13:12:13.389366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.582 NULL1 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.582 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.582 Delay0 00:07:05.583 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.583 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.583 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.583 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.583 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.583 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3629668 00:07:05.583 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:05.583 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:05.583 [2024-11-07 13:12:13.526993] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
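Everything the rpc_cmd lines above set up can be replayed as plain scripts/rpc.py calls against the target's default /var/tmp/spdk.sock. A hedged recap of this test's fixture, using the exact flags from the trace (the Delay0 bdev is what makes the deletion race interesting: with 1000000 us latencies every I/O is held for about a second, so the spdk_nvme_perf instance launched above still has commands in flight when nvmf_delete_subsystem fires below):

    # Sketch: assumes the nvmf_tgt from this trace is already listening on
    # the default RPC socket; paths are as used in this job.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10        # allow any host, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512      # 1000 MiB backing, 512 B blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in microseconds
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0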
00:07:07.496 13:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.497 13:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.497 13:12:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 starting I/O failed: -6 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 Write completed with error (sct=0, sc=8) 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 Write completed with error (sct=0, sc=8) 00:07:08.069 starting I/O failed: -6 00:07:08.069 Write completed with error (sct=0, sc=8) 00:07:08.069 Write completed with error (sct=0, sc=8) 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 starting I/O failed: -6 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 starting I/O failed: -6 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 Write completed with error (sct=0, sc=8) 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 starting I/O failed: -6 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 Write completed with error (sct=0, sc=8) 00:07:08.069 Write completed with error (sct=0, sc=8) 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 starting I/O failed: -6 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 Write completed with error (sct=0, sc=8) 00:07:08.069 Read completed with error (sct=0, sc=8) 00:07:08.069 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 [2024-11-07 13:12:15.773757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026780 is same with the state(6) to be set 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 
Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 [2024-11-07 13:12:15.774275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027180 is same with the state(6) to be set 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, 
sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 starting I/O failed: -6 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O 
failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Write completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.070 starting I/O failed: -6 00:07:08.070 Read completed with error (sct=0, sc=8) 00:07:08.071 Read completed with error (sct=0, sc=8) 00:07:08.071 starting I/O failed: -6 00:07:08.071 Read completed with error (sct=0, sc=8) 00:07:08.071 Read completed with error (sct=0, sc=8) 00:07:08.071 starting I/O failed: -6 00:07:08.071 Read completed with error (sct=0, sc=8) 00:07:08.071 Write completed with error (sct=0, sc=8) 00:07:08.071 starting I/O failed: -6 00:07:08.071 Read completed with error (sct=0, sc=8) 00:07:08.071 Read completed with error (sct=0, sc=8) 00:07:08.071 starting I/O failed: -6 00:07:08.071 Read completed with error (sct=0, sc=8) 00:07:08.071 [2024-11-07 13:12:15.781728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030000 is same with the state(6) to be set 00:07:09.013 [2024-11-07 13:12:16.754650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025d80 is same with the state(6) to be set 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, 
sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 [2024-11-07 13:12:16.777547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026c80 is same with the state(6) to be set 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 [2024-11-07 13:12:16.778079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027680 is same with the state(6) to be set 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Read completed with error (sct=0, sc=8) 00:07:09.013 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 
00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 [2024-11-07 13:12:16.783482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030f00 is same with the state(6) to be set 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 Write completed with error (sct=0, sc=8) 00:07:09.014 Read completed with error (sct=0, sc=8) 00:07:09.014 [2024-11-07 13:12:16.785661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030500 is same with the state(6) to be set 00:07:09.014 Initializing NVMe Controllers 00:07:09.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:09.014 Controller IO queue size 128, less than required. 00:07:09.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:09.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:09.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:09.014 Initialization complete. Launching workers. 
00:07:09.014 ======================================================== 00:07:09.014 Latency(us) 00:07:09.014 Device Information : IOPS MiB/s Average min max 00:07:09.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.43 0.08 910965.55 475.62 1005853.85 00:07:09.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 183.86 0.09 907008.80 566.21 1012175.95 00:07:09.014 ======================================================== 00:07:09.014 Total : 346.30 0.17 908864.77 475.62 1012175.95 00:07:09.014 00:07:09.014 [2024-11-07 13:12:16.786816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000025d80 (9): Bad file descriptor 00:07:09.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:09.014 13:12:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.014 13:12:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:09.014 13:12:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3629668 00:07:09.014 13:12:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3629668 00:07:09.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3629668) - No such process 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3629668 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3629668 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3629668 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.590 13:12:17 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.590 [2024-11-07 13:12:17.316828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3630484 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3630484 00:07:09.590 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.590 [2024-11-07 13:12:17.441044] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
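The kill -0 / sleep 0.5 cycle that repeats through the next several entries is the script's bounded liveness poll: it watches the second perf process (pid 3630484) until it exits, giving up after 20 half-second ticks (~10 s). The same pattern as a small reusable function — a sketch only; delete_subsystem.sh inlines this loop rather than defining a helper:

    # Wait for a process to exit, polling with signal 0 (existence check).
    wait_for_exit() {
        local pid=$1 delay=0
        while kill -0 "$pid" 2>/dev/null; do
            (( delay++ > 20 )) && return 1    # ~10 s cap, as in the trace
            sleep 0.5
        done
        return 0
    }
    # Usage: wait_for_exit "$perf_pid" || echo "perf still running"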
00:07:09.958 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.958 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3630484 00:07:09.958 13:12:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.552 13:12:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:10.552 13:12:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3630484 00:07:10.552 13:12:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:11.124 13:12:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:11.124 13:12:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3630484 00:07:11.124 13:12:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:11.384 13:12:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:11.384 13:12:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3630484 00:07:11.384 13:12:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:11.954 13:12:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:11.954 13:12:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3630484 00:07:11.954 13:12:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:12.526 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:12.526 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3630484 00:07:12.526 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:12.787 Initializing NVMe Controllers 00:07:12.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:12.787 Controller IO queue size 128, less than required. 00:07:12.787 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:12.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:12.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:12.787 Initialization complete. Launching workers. 
00:07:12.787 ======================================================== 00:07:12.787 Latency(us) 00:07:12.787 Device Information : IOPS MiB/s Average min max 00:07:12.787 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002003.59 1000166.80 1006258.95 00:07:12.787 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003600.07 1000266.11 1042234.23 00:07:12.787 ======================================================== 00:07:12.787 Total : 256.00 0.12 1002801.83 1000166.80 1042234.23 00:07:12.787 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3630484 00:07:13.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3630484) - No such process 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3630484 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:13.048 rmmod nvme_tcp 00:07:13.048 rmmod nvme_fabrics 00:07:13.048 rmmod nvme_keyring 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3629546 ']' 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3629546 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3629546 ']' 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3629546 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:13.048 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3629546 00:07:13.048 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:13.048 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:07:13.048 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3629546' 00:07:13.048 killing process with pid 3629546 00:07:13.048 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3629546 00:07:13.048 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3629546 00:07:13.990 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:13.990 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:13.990 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:13.990 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:13.990 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:13.990 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:13.990 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:13.990 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:13.990 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:13.990 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.990 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.990 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.904 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:15.904 00:07:15.904 real 0m20.166s 00:07:15.904 user 0m32.301s 00:07:15.904 sys 0m7.681s 00:07:15.904 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:15.904 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.904 ************************************ 00:07:15.904 END TEST nvmf_delete_subsystem 00:07:15.904 ************************************ 00:07:15.904 13:12:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:15.904 13:12:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:15.904 13:12:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.904 13:12:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:16.165 ************************************ 00:07:16.165 START TEST nvmf_host_management 00:07:16.165 ************************************ 00:07:16.165 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:16.165 * Looking for test storage... 
00:07:16.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:16.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.165 --rc genhtml_branch_coverage=1 00:07:16.165 --rc genhtml_function_coverage=1 00:07:16.165 --rc genhtml_legend=1 00:07:16.165 --rc geninfo_all_blocks=1 00:07:16.165 --rc geninfo_unexecuted_blocks=1 00:07:16.165 00:07:16.165 ' 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:16.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.165 --rc genhtml_branch_coverage=1 00:07:16.165 --rc genhtml_function_coverage=1 00:07:16.165 --rc genhtml_legend=1 00:07:16.165 --rc geninfo_all_blocks=1 00:07:16.165 --rc geninfo_unexecuted_blocks=1 00:07:16.165 00:07:16.165 ' 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:16.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.165 --rc genhtml_branch_coverage=1 00:07:16.165 --rc genhtml_function_coverage=1 00:07:16.165 --rc genhtml_legend=1 00:07:16.165 --rc geninfo_all_blocks=1 00:07:16.165 --rc geninfo_unexecuted_blocks=1 00:07:16.165 00:07:16.165 ' 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:16.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.165 --rc genhtml_branch_coverage=1 00:07:16.165 --rc genhtml_function_coverage=1 00:07:16.165 --rc genhtml_legend=1 00:07:16.165 --rc geninfo_all_blocks=1 00:07:16.165 --rc geninfo_unexecuted_blocks=1 00:07:16.165 00:07:16.165 ' 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.165 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[five further repetitions of the same /opt toolchain triple elided]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=[same toolchain-heavy value with /opt/go rotated to the front, elided] 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=[same value with /opt/protoc rotated to the front, elided] 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo [final PATH, identical to the @4 value, elided] 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:07:16.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:16.166 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:24.309 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:24.309 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:24.309 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:24.310 Found net devices under 0000:31:00.0: cvl_0_0 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.310 13:12:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:24.310 Found net devices under 0000:31:00.1: cvl_0_1 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:24.310 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:24.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:24.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:07:24.571 00:07:24.571 --- 10.0.0.2 ping statistics --- 00:07:24.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.571 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:24.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:24.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:07:24.571 00:07:24.571 --- 10.0.0.1 ping statistics --- 00:07:24.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.571 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:24.571 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:24.572 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:24.572 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.572 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.572 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3636017 00:07:24.572 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3636017 00:07:24.572 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:24.572 13:12:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3636017 ']' 00:07:24.572 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.572 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:24.572 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.572 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:24.572 13:12:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.572 [2024-11-07 13:12:32.514499] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:07:24.572 [2024-11-07 13:12:32.514634] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.833 [2024-11-07 13:12:32.698900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.833 [2024-11-07 13:12:32.826268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.833 [2024-11-07 13:12:32.826338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.833 [2024-11-07 13:12:32.826351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.833 [2024-11-07 13:12:32.826364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.833 [2024-11-07 13:12:32.826374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
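
The nvmf_tcp_init block traced above (nvmf/common.sh@250-291) builds the two-port loopback test bed: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling (cvl_0_1) stays in the default namespace as the initiator. A minimal standalone sketch of the same plumbing, with interface names, addresses, and the listener port taken from the log; it illustrates the commands shown, not the SPDK helper itself, and must run as root:

#!/usr/bin/env bash
# Rebuild the target/initiator split from the log: one physical port per role.
set -euo pipefail

TGT_IF=cvl_0_0          # becomes the target-side port inside the namespace
INI_IF=cvl_0_1          # stays in the default namespace as the initiator port
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic on the listener port, then verify both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

With this in place the target application runs under ip netns exec cvl_0_0_ns_spdk, which is the NVMF_TARGET_NS_CMD prefix visible in the nvmf_tgt invocation above.
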
00:07:24.833 [2024-11-07 13:12:32.829935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.833 [2024-11-07 13:12:32.830089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.833 [2024-11-07 13:12:32.830200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.833 [2024-11-07 13:12:32.830225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.404 [2024-11-07 13:12:33.331794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.404 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.665 Malloc0 00:07:25.665 [2024-11-07 13:12:33.449281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3636384 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3636384 /var/tmp/bdevperf.sock 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3636384 ']' 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:25.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:25.665 { 00:07:25.665 "params": { 00:07:25.665 "name": "Nvme$subsystem", 00:07:25.665 "trtype": "$TEST_TRANSPORT", 00:07:25.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.665 "adrfam": "ipv4", 00:07:25.665 "trsvcid": "$NVMF_PORT", 00:07:25.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:25.665 "hdgst": ${hdgst:-false}, 00:07:25.665 "ddgst": ${ddgst:-false} 00:07:25.665 }, 00:07:25.665 "method": "bdev_nvme_attach_controller" 00:07:25.665 } 00:07:25.665 EOF 00:07:25.665 )") 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:25.665 13:12:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:25.665 "params": { 00:07:25.665 "name": "Nvme0", 00:07:25.665 "trtype": "tcp", 00:07:25.665 "traddr": "10.0.0.2", 00:07:25.665 "adrfam": "ipv4", 00:07:25.665 "trsvcid": "4420", 00:07:25.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:25.665 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:25.665 "hdgst": false, 00:07:25.665 "ddgst": false 00:07:25.665 }, 00:07:25.665 "method": "bdev_nvme_attach_controller" 00:07:25.665 }' 00:07:25.665 [2024-11-07 13:12:33.590492] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
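
The --json /dev/fd/63 argument in the bdevperf command above is a bash process substitution: gen_nvmf_target_json renders one bdev_nvme_attach_controller entry per subsystem (both the heredoc template and the printed result appear in the trace) and feeds it to bdevperf without a temporary file. A sketch of the equivalent, using the JSON printed in the log; the outer subsystems/bdev wrapper is an assumption about bdevperf's config format rather than something the trace shows, and paths are relative to an SPDK checkout:

#!/usr/bin/env bash
# Hypothetical reconstruction of the bdevperf launch traced above.
set -euo pipefail

gen_config() {
    # Controller parameters copied verbatim from the printf in the log.
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
}

# 64-deep, 64 KiB verify workload for 10 seconds, as in the trace.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_config) -q 64 -o 65536 -w verify -t 10
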
00:07:25.665 [2024-11-07 13:12:33.590601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636384 ] 00:07:25.926 [2024-11-07 13:12:33.727322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.926 [2024-11-07 13:12:33.825247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.498 Running I/O for 10 seconds... 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:26.498 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:26.759 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:26.759 
13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:26.759 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:26.759 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:26.759 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.759 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.759 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.021 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:07:27.021 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:07:27.021 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:27.021 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:27.021 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:27.021 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:27.021 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.021 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.021 [2024-11-07 13:12:34.792968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.021 [2024-11-07 13:12:34.793029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.021 [2024-11-07 13:12:34.793065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.021 [2024-11-07 13:12:34.793078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.021 [2024-11-07 13:12:34.793093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.021 [2024-11-07 13:12:34.793104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.021 [2024-11-07 13:12:34.793118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.021 [2024-11-07 13:12:34.793128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.021 [2024-11-07 13:12:34.793141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:27.021 [2024-11-07 13:12:34.793151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
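
Once bdevperf is up, host_management.sh polls its RPC socket until the NVMe-oF bdev has served at least 100 reads before injecting the fault: the trace above shows 67 reads on the first sample and 515 a quarter-second later. The nvme_qpair dump that begins above (condensed below) is bdevperf logging every still-queued command being aborted as the target tears down the submission queue. A sketch of that polling loop, substituting scripts/rpc.py for the suite's rpc_cmd wrapper:

#!/usr/bin/env bash
# Poll bdevperf's iostat until I/O is flowing; give up after ten attempts.
waitforio() {
    local rpc_sock=$1 bdev=$2 i ops
    for ((i = 10; i != 0; i--)); do
        ops=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
              | jq -r '.bdevs[0].num_read_ops')
        if [ "$ops" -ge 100 ]; then
            return 0        # enough reads observed; safe to inject the fault
        fi
        sleep 0.25          # same back-off interval the trace shows
    done
    return 1
}

waitforio /var/tmp/bdevperf.sock Nvme0n1
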
00:07:27.021 [2024-11-07 13:12:34.793169 - 13:12:34.794541] nvme_qpair.c: (repetitive abort dump condensed) the remaining in-flight READ commands on sqid:1 (cid 4-62, lba 74240-81664, len:128 each) are printed and completed identically to the entries above: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:27.023 [2024-11-07 13:12:34.794553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000417b00 is same with the state(6) to be set 00:07:27.023 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.023 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host
00:07:27.023 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.023 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:27.023 [2024-11-07 13:12:34.796078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:27.023 task offset: 81792 on job bdev=Nvme0n1 fails
00:07:27.023
00:07:27.023 Latency(us)
[2024-11-07T12:12:35.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:27.023 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:27.023 Job: Nvme0n1 ended in about 0.43 seconds with error
00:07:27.023 Verification LBA range: start 0x0 length 0x400
00:07:27.023 Nvme0n1 : 0.43 1338.91 83.68 148.77 0.00 41718.06 6444.37 34515.63
[2024-11-07T12:12:35.030Z] ===================================================================================================================
[2024-11-07T12:12:35.030Z] Total : 1338.91 83.68 148.77 0.00 41718.06 6444.37 34515.63
00:07:27.023 [2024-11-07 13:12:34.800417] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:27.023 [2024-11-07 13:12:34.800455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor
00:07:27.023 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.023 13:12:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-11-07 13:12:34.849461] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
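For readability, the host_management.sh steps traced just above and below reduce to the following hedged bash sketch. It is reconstructed only from the line tags in the trace (@85 through @100); the rpc_cmd wrapper and the $perf_pid variable are simplifications for illustration, not the script verbatim:

rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # @85: authorize host0 while I/O is in flight
sleep 1                                                                                # @87
kill -9 "$perf_pid" || true    # @91: the first bdevperf already exited after its queues were aborted, so kill reports "No such process"
rm -f /var/tmp/spdk_cpu_lock_00{1..4}                                                  # @97: drop stale CPU core lock files
# @100: rerun bdevperf, now that the host NQN is allowed; its JSON config is traced next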
00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3636384 00:07:27.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3636384) - No such process 00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.964 { 00:07:27.964 "params": { 00:07:27.964 "name": "Nvme$subsystem", 00:07:27.964 "trtype": "$TEST_TRANSPORT", 00:07:27.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.964 "adrfam": "ipv4", 00:07:27.964 "trsvcid": "$NVMF_PORT", 00:07:27.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.964 "hdgst": ${hdgst:-false}, 00:07:27.964 "ddgst": ${ddgst:-false} 00:07:27.964 }, 00:07:27.964 "method": "bdev_nvme_attach_controller" 00:07:27.964 } 00:07:27.964 EOF 00:07:27.964 )") 00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:27.964 13:12:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.964 "params": { 00:07:27.964 "name": "Nvme0", 00:07:27.964 "trtype": "tcp", 00:07:27.964 "traddr": "10.0.0.2", 00:07:27.964 "adrfam": "ipv4", 00:07:27.964 "trsvcid": "4420", 00:07:27.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:27.964 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:27.964 "hdgst": false, 00:07:27.964 "ddgst": false 00:07:27.964 }, 00:07:27.964 "method": "bdev_nvme_attach_controller" 00:07:27.964 }' 00:07:27.964 [2024-11-07 13:12:35.896821] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:07:27.964 [2024-11-07 13:12:35.896935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636738 ] 00:07:28.225 [2024-11-07 13:12:36.033010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.225 [2024-11-07 13:12:36.131585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.795 Running I/O for 1 seconds... 
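Restated as a hedged, self-contained sketch: gen_nvmf_target_json emits one bdev_nvme_attach_controller fragment per subsystem index (the fragment and every parameter value below are verbatim from the trace above), and bdevperf reads the finished document from an anonymous file descriptor (--json /dev/fd/62 in the trace; process substitution below reproduces that effect). The outer "subsystems" wrapper is an assumption about the final shape, which the trace does not show:

gen_cfg() {
cat <<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON
}
# same bdevperf arguments as the traced run
./build/examples/bdevperf --json <(gen_cfg) -q 64 -o 65536 -w verify -t 1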
00:07:29.736 1536.00 IOPS, 96.00 MiB/s
00:07:29.736 Latency(us)
[2024-11-07T12:12:37.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:29.736 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:29.736 Verification LBA range: start 0x0 length 0x400
00:07:29.736 Nvme0n1 : 1.02 1566.94 97.93 0.00 0.00 40113.17 7154.35 34515.63
[2024-11-07T12:12:37.743Z] ===================================================================================================================
[2024-11-07T12:12:37.743Z] Total : 1566.94 97.93 0.00 0.00 40113.17 7154.35 34515.63
00:07:30.307 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:30.568 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3636017 ']'
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3636017
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3636017 ']'
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3636017
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3636017
00:07:30.568 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:07:30.568 13:12:38
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:30.568 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3636017' 00:07:30.568 killing process with pid 3636017 00:07:30.568 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3636017 00:07:30.568 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3636017 00:07:31.138 [2024-11-07 13:12:38.978963] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:31.138 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:31.138 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:31.138 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:31.138 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:31.138 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:31.138 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:31.138 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:31.138 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:31.138 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:31.138 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.138 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.138 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:33.681 00:07:33.681 real 0m17.210s 00:07:33.681 user 0m30.694s 00:07:33.681 sys 0m7.564s 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.681 ************************************ 00:07:33.681 END TEST nvmf_host_management 00:07:33.681 ************************************ 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.681 ************************************ 00:07:33.681 START TEST nvmf_lvol 00:07:33.681 ************************************ 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:33.681 * Looking for test storage... 00:07:33.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:33.681 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:33.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.682 --rc genhtml_branch_coverage=1 00:07:33.682 --rc genhtml_function_coverage=1 00:07:33.682 --rc genhtml_legend=1 00:07:33.682 --rc geninfo_all_blocks=1 00:07:33.682 --rc geninfo_unexecuted_blocks=1 00:07:33.682 00:07:33.682 ' 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:33.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.682 --rc genhtml_branch_coverage=1 00:07:33.682 --rc genhtml_function_coverage=1 00:07:33.682 --rc genhtml_legend=1 00:07:33.682 --rc geninfo_all_blocks=1 00:07:33.682 --rc geninfo_unexecuted_blocks=1 00:07:33.682 00:07:33.682 ' 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:33.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.682 --rc genhtml_branch_coverage=1 00:07:33.682 --rc genhtml_function_coverage=1 00:07:33.682 --rc genhtml_legend=1 00:07:33.682 --rc geninfo_all_blocks=1 00:07:33.682 --rc geninfo_unexecuted_blocks=1 00:07:33.682 00:07:33.682 ' 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:33.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.682 --rc genhtml_branch_coverage=1 00:07:33.682 --rc genhtml_function_coverage=1 00:07:33.682 --rc genhtml_legend=1 00:07:33.682 --rc geninfo_all_blocks=1 00:07:33.682 --rc geninfo_unexecuted_blocks=1 00:07:33.682 00:07:33.682 ' 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
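The version probe just traced (lt 1.15 2 via cmp_versions, used to pick lcov options) is worth restating. Below is a hedged, runnable bash re-sketch of the scripts/common.sh logic, reconstructed only from the xtrace above and therefore simplified; the real script's case arms and decimal mapping may differ:

decimal() {
  # pass numeric components through; the trace only ever shows numeric inputs
  local d=$1
  [[ $d =~ ^[0-9]+$ ]] && echo "$d"
}
cmp_versions() {
  local IFS=.-:               # split on dots, dashes, colons, as in the trace
  local op=$2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$3"
  local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
  for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
    ver1[v]=$(decimal "${ver1[v]:-0}")
    ver2[v]=$(decimal "${ver2[v]:-0}")
    (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
    (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }   # lt 1.15 2 decides here: 1 < 2
  done
  [[ $op == *'='* ]]          # all components equal: succeed only for operators allowing equality
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov predates 2.x"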
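The environment bring-up that follows is long, so for orientation: the nvmf_lvol test body it leads up to reduces to the RPC sequence sketched below. This is a condensed, hedged summary assembled only from the rpc.py calls traced further down (the rpc.py path is shortened here, and the UUIDs are the ones this particular run produced):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                           # Malloc0
rpc.py bdev_malloc_create 64 512                                           # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs                                  # -> 21e1a303-3a68-4e79-ac74-086d15158f2b
rpc.py bdev_lvol_create -u 21e1a303-3a68-4e79-ac74-086d15158f2b lvol 20    # -> 59498be0-9976-4383-86e6-c4157ef2dc36
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 59498be0-9976-4383-86e6-c4157ef2dc36
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf drives randwrite I/O (-q 128 -o 4096 -w randwrite -t 10 -c 0x18):
rpc.py bdev_lvol_snapshot 59498be0-9976-4383-86e6-c4157ef2dc36 MY_SNAPSHOT # -> a37233a7-8824-4f40-bc81-dbc03d39d23f
rpc.py bdev_lvol_resize 59498be0-9976-4383-86e6-c4157ef2dc36 30
rpc.py bdev_lvol_clone a37233a7-8824-4f40-bc81-dbc03d39d23f MY_CLONE       # -> 8df1f8bd-cfc6-4d72-8c38-3961b953879e
rpc.py bdev_lvol_inflate 8df1f8bd-cfc6-4d72-8c38-3961b953879e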
00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:33.682 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.683 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.683 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.683 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:33.683 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:33.683 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:33.683 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:41.818 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.818 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:41.819 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.819 13:12:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:41.819 Found net devices under 0000:31:00.0: cvl_0_0 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:41.819 Found net devices under 0000:31:00.1: cvl_0_1 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.819 13:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:41.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:07:41.819 00:07:41.819 --- 10.0.0.2 ping statistics --- 00:07:41.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.819 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:07:41.819 00:07:41.819 --- 10.0.0.1 ping statistics --- 00:07:41.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.819 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3642072 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3642072 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3642072 ']' 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:41.819 13:12:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.819 [2024-11-07 13:12:49.372443] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:07:41.819 [2024-11-07 13:12:49.372570] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.819 [2024-11-07 13:12:49.530358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.819 [2024-11-07 13:12:49.628108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.819 [2024-11-07 13:12:49.628156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.819 [2024-11-07 13:12:49.628171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.819 [2024-11-07 13:12:49.628183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.819 [2024-11-07 13:12:49.628192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.819 [2024-11-07 13:12:49.630301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.819 [2024-11-07 13:12:49.630380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.819 [2024-11-07 13:12:49.630382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.391 13:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:42.391 13:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:42.391 13:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:42.391 13:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.391 13:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.391 13:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.391 13:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:42.391 [2024-11-07 13:12:50.322326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.391 13:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:42.652 13:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:42.652 13:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:42.912 13:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:42.912 13:12:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:43.172 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:43.433 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=21e1a303-3a68-4e79-ac74-086d15158f2b 00:07:43.433 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 21e1a303-3a68-4e79-ac74-086d15158f2b lvol 20 00:07:43.433 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=59498be0-9976-4383-86e6-c4157ef2dc36 00:07:43.433 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:43.725 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 59498be0-9976-4383-86e6-c4157ef2dc36 00:07:43.986 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:43.986 [2024-11-07 13:12:51.918138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.986 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.246 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3642704 00:07:44.246 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:44.246 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:45.187 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 59498be0-9976-4383-86e6-c4157ef2dc36 MY_SNAPSHOT 00:07:45.448 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a37233a7-8824-4f40-bc81-dbc03d39d23f 00:07:45.448 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 59498be0-9976-4383-86e6-c4157ef2dc36 30 00:07:45.709 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a37233a7-8824-4f40-bc81-dbc03d39d23f MY_CLONE 00:07:45.970 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8df1f8bd-cfc6-4d72-8c38-3961b953879e 00:07:45.970 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8df1f8bd-cfc6-4d72-8c38-3961b953879e 00:07:46.548 13:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3642704 00:07:54.697 Initializing NVMe Controllers 00:07:54.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:54.697 Controller IO queue size 128, less than required. 00:07:54.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:54.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:54.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:54.697 Initialization complete. Launching workers.
00:07:54.697 ========================================================
00:07:54.697 Latency(us)
00:07:54.697 Device Information : IOPS MiB/s Average min max
00:07:54.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16406.49 64.09 7802.74 451.24 108772.84
00:07:54.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11383.52 44.47 11249.66 3381.65 125156.93
00:07:54.697 ========================================================
00:07:54.697 Total : 27790.01 108.55 9214.69 451.24 125156.93
00:07:54.697 00
13:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:54.957 13:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 59498be0-9976-4383-86e6-c4157ef2dc36
00:07:54.957 13:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21e1a303-3a68-4e79-ac74-086d15158f2b
00:07:55.218 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:55.218 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3642072 ']'
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3642072
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3642072 ']'
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3642072
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3642072
00:07:55.478 13:13:03
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:55.478 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:55.478 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3642072' 00:07:55.478 killing process with pid 3642072 00:07:55.478 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3642072 00:07:55.478 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3642072 00:07:56.418 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.418 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.418 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.418 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:56.418 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:56.418 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.418 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:56.418 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.418 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:56.418 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.418 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.418 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.332 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.332 00:07:58.332 real 0m25.122s 00:07:58.332 user 1m6.471s 00:07:58.332 sys 0m8.964s 00:07:58.332 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.332 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.332 ************************************ 00:07:58.332 END TEST nvmf_lvol 00:07:58.332 ************************************ 00:07:58.332 13:13:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:58.332 13:13:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:58.332 13:13:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.332 13:13:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.332 ************************************ 00:07:58.332 START TEST nvmf_lvs_grow 00:07:58.332 ************************************ 00:07:58.332 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:58.594 * Looking for test storage... 
00:07:58.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:58.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.594 --rc genhtml_branch_coverage=1 00:07:58.594 --rc genhtml_function_coverage=1 00:07:58.594 --rc genhtml_legend=1 00:07:58.594 --rc geninfo_all_blocks=1 00:07:58.594 --rc geninfo_unexecuted_blocks=1 00:07:58.594 00:07:58.594 ' 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:58.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.594 --rc genhtml_branch_coverage=1 00:07:58.594 --rc genhtml_function_coverage=1 00:07:58.594 --rc genhtml_legend=1 00:07:58.594 --rc geninfo_all_blocks=1 00:07:58.594 --rc geninfo_unexecuted_blocks=1 00:07:58.594 00:07:58.594 ' 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:58.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.594 --rc genhtml_branch_coverage=1 00:07:58.594 --rc genhtml_function_coverage=1 00:07:58.594 --rc genhtml_legend=1 00:07:58.594 --rc geninfo_all_blocks=1 00:07:58.594 --rc geninfo_unexecuted_blocks=1 00:07:58.594 00:07:58.594 ' 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:58.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.594 --rc genhtml_branch_coverage=1 00:07:58.594 --rc genhtml_function_coverage=1 00:07:58.594 --rc genhtml_legend=1 00:07:58.594 --rc geninfo_all_blocks=1 00:07:58.594 --rc geninfo_unexecuted_blocks=1 00:07:58.594 00:07:58.594 ' 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:58.594 13:13:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.594 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:58.595 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:08.602 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.602 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:08.603 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.603 13:13:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:08.603 Found net devices under 0000:31:00.0: cvl_0_0 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:08.603 Found net devices under 0000:31:00.1: cvl_0_1 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.603 13:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:08:08.603 00:08:08.603 --- 10.0.0.2 ping statistics --- 00:08:08.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.603 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:08.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:08:08.603 00:08:08.603 --- 10.0.0.1 ping statistics --- 00:08:08.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.603 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3649773 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3649773 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3649773 ']' 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:08.603 13:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.603 [2024-11-07 13:13:15.271395] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
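The trace above is nvmftestinit assembling the two-port NVMe/TCP topology the rest of this suite runs on: the first discovered E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule admits TCP port 4420 on the initiator port; on this phy rig the two ports are evidently linked back-to-back, which is why the plain pings cross the namespace boundary. nvmfappstart then launches the target inside that namespace, whose startup banner continues below. Reduced to the essential commands, as logged (root required):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in past the firewall
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

Namespacing only the target forces the benchmark traffic to genuinely traverse the NICs rather than short-circuit over loopback, while everything still runs on one machine. Only the network stack is isolated, so the target's path-based RPC socket (/var/tmp/spdk.sock) stays reachable, which is why the rpc.py calls later in the log need no ip netns exec prefix.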
00:08:08.603 [2024-11-07 13:13:15.271526] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.603 [2024-11-07 13:13:15.434592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.603 [2024-11-07 13:13:15.534379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.603 [2024-11-07 13:13:15.534423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.603 [2024-11-07 13:13:15.534435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.603 [2024-11-07 13:13:15.534446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.603 [2024-11-07 13:13:15.534457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.603 [2024-11-07 13:13:15.535646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.603 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:08.603 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:08:08.603 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:08.603 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:08.603 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.603 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.603 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:08.603 [2024-11-07 13:13:16.214765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.603 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:08.603 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:08.603 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.604 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.604 ************************************ 00:08:08.604 START TEST lvs_grow_clean 00:08:08.604 ************************************ 00:08:08.604 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:08:08.604 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:08.604 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:08.604 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:08.604 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:08.604 13:13:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:08.604 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:08.604 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.604 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.604 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.604 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:08.604 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:08.864 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=55e70a60-5146-4644-9528-82ebb86a3b7f 00:08:08.864 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55e70a60-5146-4644-9528-82ebb86a3b7f 00:08:08.864 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:08.864 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:08.864 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:08.864 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 55e70a60-5146-4644-9528-82ebb86a3b7f lvol 150 00:08:09.125 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=df7662bb-d2e8-438a-aad0-db7f8fd1f407 00:08:09.125 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.125 13:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:09.125 [2024-11-07 13:13:17.128709] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:09.125 [2024-11-07 13:13:17.128783] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:09.386 true 00:08:09.386 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
55e70a60-5146-4644-9528-82ebb86a3b7f 00:08:09.386 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:09.386 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:09.386 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.646 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 df7662bb-d2e8-438a-aad0-db7f8fd1f407 00:08:09.646 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.907 [2024-11-07 13:13:17.766753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.907 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:10.167 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3650473 00:08:10.167 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.167 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3650473 /var/tmp/bdevperf.sock 00:08:10.167 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:10.167 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3650473 ']' 00:08:10.167 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.167 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:10.167 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.167 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:10.167 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:10.167 [2024-11-07 13:13:18.011439] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
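At this point lvs_grow_clean has its whole data path in place: a 200 MiB file-backed AIO bdev carrying an lvstore with 4 MiB clusters (50 clusters minus metadata overhead, hence the data_clusters == 49 check above), a thick-provisioned 150 MiB lvol inside it, an NVMe-oF subsystem exposing that lvol at 10.0.0.2:4420, and a freshly launched bdevperf in the root namespace. Condensed, with the same rpc shorthand as before and an aio variable standing in for the backing-file path the log prints in full:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M $aio                                          # sparse 200 MiB backing file
  $rpc bdev_aio_create $aio aio_bdev 4096                        # file-backed bdev, 4 KiB blocks
  $rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  $rpc bdev_lvol_create -u 55e70a60-5146-4644-9528-82ebb86a3b7f lvol 150   # 150 MiB lvol on this run's lvstore
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 df7662bb-d2e8-438a-aad0-db7f8fd1f407
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf was started with -q 128 -o 4096 -w randwrite -t 10 (128 queued 4 KiB random writes for ten seconds), -S 1 for the one-second samples that follow, and -z so it idles until perform_tests arrives over its private socket /var/tmp/bdevperf.sock. The trace below attaches it to the subsystem with bdev_nvme_attach_controller and then, mid-workload, doubles the backing file and grows the lvstore under live I/O, which is the behavior this test exists to exercise.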
00:08:10.167 [2024-11-07 13:13:18.011549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650473 ] 00:08:10.167 [2024-11-07 13:13:18.162928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.428 [2024-11-07 13:13:18.259103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.000 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:11.000 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:08:11.000 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:11.261 Nvme0n1 00:08:11.261 13:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:11.261 [ 00:08:11.261 { 00:08:11.261 "name": "Nvme0n1", 00:08:11.261 "aliases": [ 00:08:11.261 "df7662bb-d2e8-438a-aad0-db7f8fd1f407" 00:08:11.261 ], 00:08:11.261 "product_name": "NVMe disk", 00:08:11.261 "block_size": 4096, 00:08:11.261 "num_blocks": 38912, 00:08:11.261 "uuid": "df7662bb-d2e8-438a-aad0-db7f8fd1f407", 00:08:11.261 "numa_id": 0, 00:08:11.261 "assigned_rate_limits": { 00:08:11.261 "rw_ios_per_sec": 0, 00:08:11.261 "rw_mbytes_per_sec": 0, 00:08:11.261 "r_mbytes_per_sec": 0, 00:08:11.261 "w_mbytes_per_sec": 0 00:08:11.261 }, 00:08:11.261 "claimed": false, 00:08:11.261 "zoned": false, 00:08:11.261 "supported_io_types": { 00:08:11.261 "read": true, 00:08:11.261 "write": true, 00:08:11.261 "unmap": true, 00:08:11.261 "flush": true, 00:08:11.261 "reset": true, 00:08:11.261 "nvme_admin": true, 00:08:11.261 "nvme_io": true, 00:08:11.261 "nvme_io_md": false, 00:08:11.261 "write_zeroes": true, 00:08:11.261 "zcopy": false, 00:08:11.261 "get_zone_info": false, 00:08:11.261 "zone_management": false, 00:08:11.261 "zone_append": false, 00:08:11.261 "compare": true, 00:08:11.261 "compare_and_write": true, 00:08:11.261 "abort": true, 00:08:11.261 "seek_hole": false, 00:08:11.261 "seek_data": false, 00:08:11.261 "copy": true, 00:08:11.261 "nvme_iov_md": false 00:08:11.261 }, 00:08:11.261 "memory_domains": [ 00:08:11.261 { 00:08:11.261 "dma_device_id": "system", 00:08:11.261 "dma_device_type": 1 00:08:11.261 } 00:08:11.261 ], 00:08:11.261 "driver_specific": { 00:08:11.261 "nvme": [ 00:08:11.261 { 00:08:11.261 "trid": { 00:08:11.261 "trtype": "TCP", 00:08:11.261 "adrfam": "IPv4", 00:08:11.261 "traddr": "10.0.0.2", 00:08:11.261 "trsvcid": "4420", 00:08:11.261 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:11.261 }, 00:08:11.261 "ctrlr_data": { 00:08:11.261 "cntlid": 1, 00:08:11.261 "vendor_id": "0x8086", 00:08:11.261 "model_number": "SPDK bdev Controller", 00:08:11.261 "serial_number": "SPDK0", 00:08:11.261 "firmware_revision": "25.01", 00:08:11.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:11.261 "oacs": { 00:08:11.261 "security": 0, 00:08:11.261 "format": 0, 00:08:11.261 "firmware": 0, 00:08:11.261 "ns_manage": 0 00:08:11.261 }, 00:08:11.261 "multi_ctrlr": true, 00:08:11.261 
"ana_reporting": false 00:08:11.261 }, 00:08:11.261 "vs": { 00:08:11.261 "nvme_version": "1.3" 00:08:11.261 }, 00:08:11.261 "ns_data": { 00:08:11.261 "id": 1, 00:08:11.261 "can_share": true 00:08:11.261 } 00:08:11.261 } 00:08:11.261 ], 00:08:11.261 "mp_policy": "active_passive" 00:08:11.261 } 00:08:11.261 } 00:08:11.261 ] 00:08:11.261 13:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3650652 00:08:11.261 13:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:11.261 13:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:11.521 Running I/O for 10 seconds... 00:08:12.462 Latency(us) 00:08:12.462 [2024-11-07T12:13:20.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.462 Nvme0n1 : 1.00 16054.00 62.71 0.00 0.00 0.00 0.00 0.00 00:08:12.462 [2024-11-07T12:13:20.469Z] =================================================================================================================== 00:08:12.462 [2024-11-07T12:13:20.469Z] Total : 16054.00 62.71 0.00 0.00 0.00 0.00 0.00 00:08:12.462 00:08:13.421 13:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 55e70a60-5146-4644-9528-82ebb86a3b7f 00:08:13.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.421 Nvme0n1 : 2.00 16161.50 63.13 0.00 0.00 0.00 0.00 0.00 00:08:13.421 [2024-11-07T12:13:21.428Z] =================================================================================================================== 00:08:13.421 [2024-11-07T12:13:21.428Z] Total : 16161.50 63.13 0.00 0.00 0.00 0.00 0.00 00:08:13.421 00:08:13.421 true 00:08:13.421 13:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55e70a60-5146-4644-9528-82ebb86a3b7f 00:08:13.421 13:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:13.744 13:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:13.744 13:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:13.744 13:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3650652 00:08:14.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.687 Nvme0n1 : 3.00 16202.33 63.29 0.00 0.00 0.00 0.00 0.00 00:08:14.687 [2024-11-07T12:13:22.694Z] =================================================================================================================== 00:08:14.687 [2024-11-07T12:13:22.694Z] Total : 16202.33 63.29 0.00 0.00 0.00 0.00 0.00 00:08:14.687 00:08:15.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.627 Nvme0n1 : 4.00 16245.00 63.46 0.00 0.00 0.00 0.00 0.00 00:08:15.627 [2024-11-07T12:13:23.634Z] 
===================================================================================================================
00:08:15.627 [2024-11-07T12:13:23.634Z] Total : 16245.00 63.46 0.00 0.00 0.00 0.00 0.00
00:08:15.627
00:08:16.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:16.569 Nvme0n1 : 5.00 16271.60 63.56 0.00 0.00 0.00 0.00 0.00
00:08:16.569 [2024-11-07T12:13:24.576Z] ===================================================================================================================
00:08:16.569 [2024-11-07T12:13:24.576Z] Total : 16271.60 63.56 0.00 0.00 0.00 0.00 0.00
00:08:16.569
00:08:17.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:17.511 Nvme0n1 : 6.00 16288.33 63.63 0.00 0.00 0.00 0.00 0.00
00:08:17.511 [2024-11-07T12:13:25.518Z] ===================================================================================================================
00:08:17.511 [2024-11-07T12:13:25.518Z] Total : 16288.33 63.63 0.00 0.00 0.00 0.00 0.00
00:08:18.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:18.453 Nvme0n1 : 7.00 16301.00 63.68 0.00 0.00 0.00 0.00 0.00
00:08:18.453 [2024-11-07T12:13:26.460Z] ===================================================================================================================
00:08:18.453 [2024-11-07T12:13:26.460Z] Total : 16301.00 63.68 0.00 0.00 0.00 0.00 0.00
00:08:19.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:19.394 Nvme0n1 : 8.00 16323.25 63.76 0.00 0.00 0.00 0.00 0.00
00:08:19.394 [2024-11-07T12:13:27.401Z] ===================================================================================================================
00:08:19.394 [2024-11-07T12:13:27.401Z] Total : 16323.25 63.76 0.00 0.00 0.00 0.00 0.00
00:08:20.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:20.376 Nvme0n1 : 9.00 16342.33 63.84 0.00 0.00 0.00 0.00 0.00
00:08:20.376 [2024-11-07T12:13:28.383Z] ===================================================================================================================
00:08:20.376 [2024-11-07T12:13:28.383Z] Total : 16342.33 63.84 0.00 0.00 0.00 0.00 0.00
00:08:21.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:21.760 Nvme0n1 : 10.00 16348.20 63.86 0.00 0.00 0.00 0.00 0.00
00:08:21.760 [2024-11-07T12:13:29.767Z] ===================================================================================================================
00:08:21.760 [2024-11-07T12:13:29.767Z] Total : 16348.20 63.86 0.00 0.00 0.00 0.00 0.00
00:08:21.760
00:08:21.760
00:08:21.760 Latency(us)
00:08:21.760 [2024-11-07T12:13:29.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:21.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:08:21.760 Nvme0n1 : 10.01 16348.76 63.86 0.00 0.00 7826.00 4751.36 16602.45
00:08:21.760 [2024-11-07T12:13:29.767Z] ===================================================================================================================
00:08:21.760 [2024-11-07T12:13:29.767Z] Total : 16348.76 63.86 0.00 0.00 7826.00 4751.36 16602.45
00:08:21.760 {
00:08:21.760 "results": [
00:08:21.760 {
00:08:21.760 "job": "Nvme0n1",
00:08:21.760 "core_mask": "0x2",
00:08:21.760 "workload": "randwrite",
00:08:21.760 "status": "finished",
00:08:21.760 "queue_depth": 128,
00:08:21.760 "io_size": 4096,
"runtime": 10.007484,
00:08:21.760 "iops": 16348.764584584897,
00:08:21.760 "mibps": 63.86236165853475,
00:08:21.760 "io_failed": 0,
00:08:21.760 "io_timeout": 0,
00:08:21.760 "avg_latency_us": 7826.000039443392,
00:08:21.760 "min_latency_us": 4751.36,
00:08:21.760 "max_latency_us": 16602.453333333335
00:08:21.760 }
00:08:21.760 ],
00:08:21.760 "core_count": 1
00:08:21.760 }
00:08:21.760 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3650473
00:08:21.760 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3650473 ']'
00:08:21.760 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3650473
00:08:21.760 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname
00:08:21.760 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:21.760 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3650473
00:08:21.760 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:08:21.760 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:08:21.760 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3650473'
00:08:21.760 killing process with pid 3650473
00:08:21.760 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3650473
00:08:21.760 Received shutdown signal, test time was about 10.000000 seconds
00:08:21.760
00:08:21.760 Latency(us)
00:08:21.760 [2024-11-07T12:13:29.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:21.760 [2024-11-07T12:13:29.767Z] ===================================================================================================================
00:08:21.760 [2024-11-07T12:13:29.767Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:08:21.760 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3650473
00:08:22.022 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:22.282 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:22.282 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55e70a60-5146-4644-9528-82ebb86a3b7f
00:08:22.282 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:08:22.543 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:08:22.543 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:08:22.543 13:13:30
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:22.804 [2024-11-07 13:13:30.552692] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:22.804 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55e70a60-5146-4644-9528-82ebb86a3b7f 00:08:22.804 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:22.804 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55e70a60-5146-4644-9528-82ebb86a3b7f 00:08:22.804 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.804 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.804 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.804 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.804 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.804 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.804 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.804 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:22.804 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55e70a60-5146-4644-9528-82ebb86a3b7f 00:08:22.804 request: 00:08:22.804 { 00:08:22.804 "uuid": "55e70a60-5146-4644-9528-82ebb86a3b7f", 00:08:22.804 "method": "bdev_lvol_get_lvstores", 00:08:22.804 "req_id": 1 00:08:22.805 } 00:08:22.805 Got JSON-RPC error response 00:08:22.805 response: 00:08:22.805 { 00:08:22.805 "code": -19, 00:08:22.805 "message": "No such device" 00:08:22.805 } 00:08:22.805 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:22.805 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.805 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:22.805 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.805 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.065 aio_bdev 00:08:23.065 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev df7662bb-d2e8-438a-aad0-db7f8fd1f407 00:08:23.065 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=df7662bb-d2e8-438a-aad0-db7f8fd1f407 00:08:23.065 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:23.065 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:08:23.065 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:23.065 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:23.066 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:23.326 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df7662bb-d2e8-438a-aad0-db7f8fd1f407 -t 2000 00:08:23.326 [ 00:08:23.326 { 00:08:23.326 "name": "df7662bb-d2e8-438a-aad0-db7f8fd1f407", 00:08:23.326 "aliases": [ 00:08:23.326 "lvs/lvol" 00:08:23.326 ], 00:08:23.326 "product_name": "Logical Volume", 00:08:23.326 "block_size": 4096, 00:08:23.326 "num_blocks": 38912, 00:08:23.326 "uuid": "df7662bb-d2e8-438a-aad0-db7f8fd1f407", 00:08:23.326 "assigned_rate_limits": { 00:08:23.326 "rw_ios_per_sec": 0, 00:08:23.326 "rw_mbytes_per_sec": 0, 00:08:23.326 "r_mbytes_per_sec": 0, 00:08:23.326 "w_mbytes_per_sec": 0 00:08:23.326 }, 00:08:23.326 "claimed": false, 00:08:23.326 "zoned": false, 00:08:23.326 "supported_io_types": { 00:08:23.326 "read": true, 00:08:23.326 "write": true, 00:08:23.326 "unmap": true, 00:08:23.326 "flush": false, 00:08:23.326 "reset": true, 00:08:23.326 "nvme_admin": false, 00:08:23.326 "nvme_io": false, 00:08:23.326 "nvme_io_md": false, 00:08:23.326 "write_zeroes": true, 00:08:23.326 "zcopy": false, 00:08:23.326 "get_zone_info": false, 00:08:23.326 "zone_management": false, 00:08:23.326 "zone_append": false, 00:08:23.327 "compare": false, 00:08:23.327 "compare_and_write": false, 00:08:23.327 "abort": false, 00:08:23.327 "seek_hole": true, 00:08:23.327 "seek_data": true, 00:08:23.327 "copy": false, 00:08:23.327 "nvme_iov_md": false 00:08:23.327 }, 00:08:23.327 "driver_specific": { 00:08:23.327 "lvol": { 00:08:23.327 "lvol_store_uuid": "55e70a60-5146-4644-9528-82ebb86a3b7f", 00:08:23.327 "base_bdev": "aio_bdev", 00:08:23.327 "thin_provision": false, 00:08:23.327 "num_allocated_clusters": 38, 00:08:23.327 "snapshot": false, 00:08:23.327 "clone": false, 00:08:23.327 "esnap_clone": false 00:08:23.327 } 00:08:23.327 } 00:08:23.327 } 00:08:23.327 ] 00:08:23.327 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:08:23.327 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55e70a60-5146-4644-9528-82ebb86a3b7f 00:08:23.327 
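
At this point the clean-path test has hot-removed the lvstore by deleting its backing AIO bdev, confirmed that bdev_lvol_get_lvstores now fails with JSON-RPC error -19 (No such device), and re-created the AIO bdev so the lvstore reloads from its on-disk metadata. A condensed sketch of that verify-and-reload sequence, using the same rpc.py calls shown above (rpc.py stands for the full scripts/rpc.py path used throughout this log, and /path/to/aio_bdev stands in for the test file under test/nvmf/target):

    # Delete the backing AIO bdev; the lvstore is hot-removed with it.
    rpc.py bdev_aio_delete aio_bdev

    # The lvstore must now be gone: expect JSON-RPC error -19.
    rpc.py bdev_lvol_get_lvstores -u 55e70a60-5146-4644-9528-82ebb86a3b7f \
        || echo "lvstore gone, as expected"

    # Re-create the AIO bdev over the same file with a 4096-byte block size;
    # examine brings the lvstore and its lvol bdev back.
    rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096
    rpc.py bdev_wait_for_examine

    # free_clusters should match the pre-removal value (61 in this run).
    rpc.py bdev_lvol_get_lvstores -u 55e70a60-5146-4644-9528-82ebb86a3b7f \
        | jq -r '.[0].free_clusters'
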
13:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:23.588 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:23.588 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55e70a60-5146-4644-9528-82ebb86a3b7f 00:08:23.588 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:23.848 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:23.848 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete df7662bb-d2e8-438a-aad0-db7f8fd1f407 00:08:23.848 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55e70a60-5146-4644-9528-82ebb86a3b7f 00:08:24.108 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.108 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.368 00:08:24.368 real 0m15.874s 00:08:24.368 user 0m15.541s 00:08:24.368 sys 0m1.424s 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:24.368 ************************************ 00:08:24.368 END TEST lvs_grow_clean 00:08:24.368 ************************************ 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.368 ************************************ 00:08:24.368 START TEST lvs_grow_dirty 00:08:24.368 ************************************ 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.368 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.629 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:24.629 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:24.629 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=471864e8-b49b-4afd-a803-c1c678578b0e 00:08:24.629 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:24.629 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:24.888 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:24.888 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:24.888 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 471864e8-b49b-4afd-a803-c1c678578b0e lvol 150 00:08:25.149 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=501a117e-664b-45fc-95d2-c80b5b3bfff8 00:08:25.149 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.149 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:25.149 [2024-11-07 13:13:33.052316] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:25.149 [2024-11-07 13:13:33.052391] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:25.149 true 00:08:25.149 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:25.149 13:13:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:25.409 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:25.409 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:25.409 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 501a117e-664b-45fc-95d2-c80b5b3bfff8 00:08:25.670 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:25.930 [2024-11-07 13:13:33.722479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.930 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:25.930 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3653529 00:08:25.930 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:25.930 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:25.930 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3653529 /var/tmp/bdevperf.sock 00:08:25.930 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3653529 ']' 00:08:25.931 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:25.931 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:25.931 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:25.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:25.931 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:25.931 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:26.191 [2024-11-07 13:13:33.981375] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
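
The dirty variant drives I/O against the lvol through an NVMe-oF TCP initiator: a standalone bdevperf process is started in wait mode on its own RPC socket, a controller is attached to the subsystem configured just above, and the 10-second randwrite run is triggered from the companion bdevperf.py helper. Condensed from the commands in this run (long workspace paths shortened; bdevperf and bdevperf.py refer to build/examples/bdevperf and examples/bdev/bdevperf/bdevperf.py respectively):

    # Start bdevperf in wait mode (-z) on a private RPC socket.
    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &

    # Attach an NVMe-oF TCP controller to the exported subsystem.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0

    # Kick off the configured workload; the per-second IOPS table
    # below is this run's output.
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
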
00:08:26.191 [2024-11-07 13:13:33.981485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653529 ] 00:08:26.191 [2024-11-07 13:13:34.129081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.452 [2024-11-07 13:13:34.203572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.023 13:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.024 13:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:27.024 13:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:27.284 Nvme0n1 00:08:27.284 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:27.545 [ 00:08:27.545 { 00:08:27.545 "name": "Nvme0n1", 00:08:27.545 "aliases": [ 00:08:27.545 "501a117e-664b-45fc-95d2-c80b5b3bfff8" 00:08:27.545 ], 00:08:27.545 "product_name": "NVMe disk", 00:08:27.545 "block_size": 4096, 00:08:27.545 "num_blocks": 38912, 00:08:27.545 "uuid": "501a117e-664b-45fc-95d2-c80b5b3bfff8", 00:08:27.545 "numa_id": 0, 00:08:27.545 "assigned_rate_limits": { 00:08:27.545 "rw_ios_per_sec": 0, 00:08:27.545 "rw_mbytes_per_sec": 0, 00:08:27.545 "r_mbytes_per_sec": 0, 00:08:27.545 "w_mbytes_per_sec": 0 00:08:27.545 }, 00:08:27.545 "claimed": false, 00:08:27.545 "zoned": false, 00:08:27.545 "supported_io_types": { 00:08:27.545 "read": true, 00:08:27.545 "write": true, 00:08:27.545 "unmap": true, 00:08:27.545 "flush": true, 00:08:27.545 "reset": true, 00:08:27.545 "nvme_admin": true, 00:08:27.545 "nvme_io": true, 00:08:27.545 "nvme_io_md": false, 00:08:27.545 "write_zeroes": true, 00:08:27.545 "zcopy": false, 00:08:27.545 "get_zone_info": false, 00:08:27.545 "zone_management": false, 00:08:27.545 "zone_append": false, 00:08:27.545 "compare": true, 00:08:27.545 "compare_and_write": true, 00:08:27.545 "abort": true, 00:08:27.545 "seek_hole": false, 00:08:27.545 "seek_data": false, 00:08:27.545 "copy": true, 00:08:27.545 "nvme_iov_md": false 00:08:27.545 }, 00:08:27.545 "memory_domains": [ 00:08:27.545 { 00:08:27.546 "dma_device_id": "system", 00:08:27.546 "dma_device_type": 1 00:08:27.546 } 00:08:27.546 ], 00:08:27.546 "driver_specific": { 00:08:27.546 "nvme": [ 00:08:27.546 { 00:08:27.546 "trid": { 00:08:27.546 "trtype": "TCP", 00:08:27.546 "adrfam": "IPv4", 00:08:27.546 "traddr": "10.0.0.2", 00:08:27.546 "trsvcid": "4420", 00:08:27.546 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:27.546 }, 00:08:27.546 "ctrlr_data": { 00:08:27.546 "cntlid": 1, 00:08:27.546 "vendor_id": "0x8086", 00:08:27.546 "model_number": "SPDK bdev Controller", 00:08:27.546 "serial_number": "SPDK0", 00:08:27.546 "firmware_revision": "25.01", 00:08:27.546 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.546 "oacs": { 00:08:27.546 "security": 0, 00:08:27.546 "format": 0, 00:08:27.546 "firmware": 0, 00:08:27.546 "ns_manage": 0 00:08:27.546 }, 00:08:27.546 "multi_ctrlr": true, 00:08:27.546 
"ana_reporting": false 00:08:27.546 }, 00:08:27.546 "vs": { 00:08:27.546 "nvme_version": "1.3" 00:08:27.546 }, 00:08:27.546 "ns_data": { 00:08:27.546 "id": 1, 00:08:27.546 "can_share": true 00:08:27.546 } 00:08:27.546 } 00:08:27.546 ], 00:08:27.546 "mp_policy": "active_passive" 00:08:27.546 } 00:08:27.546 } 00:08:27.546 ] 00:08:27.546 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3653866 00:08:27.546 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:27.546 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:27.546 Running I/O for 10 seconds... 00:08:28.487 Latency(us) 00:08:28.487 [2024-11-07T12:13:36.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.487 Nvme0n1 : 1.00 16129.00 63.00 0.00 0.00 0.00 0.00 0.00 00:08:28.487 [2024-11-07T12:13:36.494Z] =================================================================================================================== 00:08:28.487 [2024-11-07T12:13:36.494Z] Total : 16129.00 63.00 0.00 0.00 0.00 0.00 0.00 00:08:28.487 00:08:29.429 13:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:29.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.429 Nvme0n1 : 2.00 16231.00 63.40 0.00 0.00 0.00 0.00 0.00 00:08:29.429 [2024-11-07T12:13:37.436Z] =================================================================================================================== 00:08:29.429 [2024-11-07T12:13:37.436Z] Total : 16231.00 63.40 0.00 0.00 0.00 0.00 0.00 00:08:29.429 00:08:29.689 true 00:08:29.689 13:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:29.689 13:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:29.950 13:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:29.950 13:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:29.950 13:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3653866 00:08:30.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.520 Nvme0n1 : 3.00 16277.33 63.58 0.00 0.00 0.00 0.00 0.00 00:08:30.520 [2024-11-07T12:13:38.527Z] =================================================================================================================== 00:08:30.520 [2024-11-07T12:13:38.527Z] Total : 16277.33 63.58 0.00 0.00 0.00 0.00 0.00 00:08:30.520 00:08:31.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.462 Nvme0n1 : 4.00 16323.75 63.76 0.00 0.00 0.00 0.00 0.00 00:08:31.462 [2024-11-07T12:13:39.469Z] 
=================================================================================================================== 00:08:31.462 [2024-11-07T12:13:39.469Z] Total : 16323.75 63.76 0.00 0.00 0.00 0.00 0.00 00:08:31.462 00:08:32.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.847 Nvme0n1 : 5.00 16334.80 63.81 0.00 0.00 0.00 0.00 0.00 00:08:32.847 [2024-11-07T12:13:40.854Z] =================================================================================================================== 00:08:32.847 [2024-11-07T12:13:40.854Z] Total : 16334.80 63.81 0.00 0.00 0.00 0.00 0.00 00:08:32.847 00:08:33.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.418 Nvme0n1 : 6.00 16348.00 63.86 0.00 0.00 0.00 0.00 0.00 00:08:33.418 [2024-11-07T12:13:41.425Z] =================================================================================================================== 00:08:33.418 [2024-11-07T12:13:41.425Z] Total : 16348.00 63.86 0.00 0.00 0.00 0.00 0.00 00:08:33.418 00:08:34.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.802 Nvme0n1 : 7.00 16366.29 63.93 0.00 0.00 0.00 0.00 0.00 00:08:34.802 [2024-11-07T12:13:42.809Z] =================================================================================================================== 00:08:34.802 [2024-11-07T12:13:42.809Z] Total : 16366.29 63.93 0.00 0.00 0.00 0.00 0.00 00:08:34.802 00:08:35.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.743 Nvme0n1 : 8.00 16376.62 63.97 0.00 0.00 0.00 0.00 0.00 00:08:35.743 [2024-11-07T12:13:43.750Z] =================================================================================================================== 00:08:35.743 [2024-11-07T12:13:43.750Z] Total : 16376.62 63.97 0.00 0.00 0.00 0.00 0.00 00:08:35.743 00:08:36.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.688 Nvme0n1 : 9.00 16388.00 64.02 0.00 0.00 0.00 0.00 0.00 00:08:36.688 [2024-11-07T12:13:44.695Z] =================================================================================================================== 00:08:36.688 [2024-11-07T12:13:44.695Z] Total : 16388.00 64.02 0.00 0.00 0.00 0.00 0.00 00:08:36.688 00:08:37.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.630 Nvme0n1 : 10.00 16400.10 64.06 0.00 0.00 0.00 0.00 0.00 00:08:37.630 [2024-11-07T12:13:45.637Z] =================================================================================================================== 00:08:37.630 [2024-11-07T12:13:45.637Z] Total : 16400.10 64.06 0.00 0.00 0.00 0.00 0.00 00:08:37.630 00:08:37.630 00:08:37.630 Latency(us) 00:08:37.630 [2024-11-07T12:13:45.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.630 Nvme0n1 : 10.00 16405.80 64.09 0.00 0.00 7799.03 4751.36 15073.28 00:08:37.630 [2024-11-07T12:13:45.637Z] =================================================================================================================== 00:08:37.630 [2024-11-07T12:13:45.637Z] Total : 16405.80 64.09 0.00 0.00 7799.03 4751.36 15073.28 00:08:37.630 { 00:08:37.630 "results": [ 00:08:37.630 { 00:08:37.630 "job": "Nvme0n1", 00:08:37.630 "core_mask": "0x2", 00:08:37.631 "workload": "randwrite", 00:08:37.631 "status": "finished", 00:08:37.631 "queue_depth": 128, 00:08:37.631 "io_size": 4096, 00:08:37.631 
"runtime": 10.004328, 00:08:37.631 "iops": 16405.799569946128, 00:08:37.631 "mibps": 64.08515457010206, 00:08:37.631 "io_failed": 0, 00:08:37.631 "io_timeout": 0, 00:08:37.631 "avg_latency_us": 7799.029250731641, 00:08:37.631 "min_latency_us": 4751.36, 00:08:37.631 "max_latency_us": 15073.28 00:08:37.631 } 00:08:37.631 ], 00:08:37.631 "core_count": 1 00:08:37.631 } 00:08:37.631 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3653529 00:08:37.631 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3653529 ']' 00:08:37.631 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3653529 00:08:37.631 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:37.631 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:37.631 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3653529 00:08:37.631 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:37.631 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:37.631 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3653529' 00:08:37.631 killing process with pid 3653529 00:08:37.631 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3653529 00:08:37.631 Received shutdown signal, test time was about 10.000000 seconds 00:08:37.631 00:08:37.631 Latency(us) 00:08:37.631 [2024-11-07T12:13:45.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.631 [2024-11-07T12:13:45.638Z] =================================================================================================================== 00:08:37.631 [2024-11-07T12:13:45.638Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:37.631 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3653529 00:08:38.201 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.201 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:38.461 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:38.461 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:38.722 13:13:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3649773 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3649773 00:08:38.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3649773 Killed "${NVMF_APP[@]}" "$@" 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3656188 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3656188 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3656188 ']' 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:38.722 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.722 [2024-11-07 13:13:46.709820] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:08:38.722 [2024-11-07 13:13:46.709937] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.982 [2024-11-07 13:13:46.870243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.982 [2024-11-07 13:13:46.966817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.982 [2024-11-07 13:13:46.966868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.982 [2024-11-07 13:13:46.966881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.982 [2024-11-07 13:13:46.966893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
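
With the grown lvstore left dirty (its superblock was never cleanly closed), the test kill -9s the original nvmf_tgt and starts a fresh one, so the next bdev_aio_create forces blobstore recovery rather than a clean open; that is what the bs_recover notices below are reporting. The restart step, reduced to its essentials (the ip netns wrapper used in this log is omitted, and /path/to/aio_bdev again stands in for the test file):

    # Kill the target hard so the lvstore metadata stays dirty on disk.
    kill -9 "$nvmfpid"

    # Start a fresh target and re-attach the same backing file; loading
    # the blobstore now goes through recovery instead of a clean load.
    nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096
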
00:08:38.982 [2024-11-07 13:13:46.966904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.982 [2024-11-07 13:13:46.968132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.553 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:39.553 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:39.553 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.553 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.553 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:39.553 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.553 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.814 [2024-11-07 13:13:47.664677] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:39.815 [2024-11-07 13:13:47.664830] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:39.815 [2024-11-07 13:13:47.664881] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:39.815 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:39.815 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 501a117e-664b-45fc-95d2-c80b5b3bfff8 00:08:39.815 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=501a117e-664b-45fc-95d2-c80b5b3bfff8 00:08:39.815 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:39.815 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:39.815 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:39.815 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:39.815 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:40.076 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 501a117e-664b-45fc-95d2-c80b5b3bfff8 -t 2000 00:08:40.076 [ 00:08:40.076 { 00:08:40.076 "name": "501a117e-664b-45fc-95d2-c80b5b3bfff8", 00:08:40.076 "aliases": [ 00:08:40.076 "lvs/lvol" 00:08:40.076 ], 00:08:40.076 "product_name": "Logical Volume", 00:08:40.076 "block_size": 4096, 00:08:40.076 "num_blocks": 38912, 00:08:40.076 "uuid": "501a117e-664b-45fc-95d2-c80b5b3bfff8", 00:08:40.076 "assigned_rate_limits": { 00:08:40.076 "rw_ios_per_sec": 0, 00:08:40.076 "rw_mbytes_per_sec": 0, 
00:08:40.076 "r_mbytes_per_sec": 0, 00:08:40.076 "w_mbytes_per_sec": 0 00:08:40.076 }, 00:08:40.076 "claimed": false, 00:08:40.076 "zoned": false, 00:08:40.076 "supported_io_types": { 00:08:40.076 "read": true, 00:08:40.076 "write": true, 00:08:40.076 "unmap": true, 00:08:40.076 "flush": false, 00:08:40.076 "reset": true, 00:08:40.076 "nvme_admin": false, 00:08:40.076 "nvme_io": false, 00:08:40.076 "nvme_io_md": false, 00:08:40.076 "write_zeroes": true, 00:08:40.076 "zcopy": false, 00:08:40.076 "get_zone_info": false, 00:08:40.076 "zone_management": false, 00:08:40.076 "zone_append": false, 00:08:40.076 "compare": false, 00:08:40.076 "compare_and_write": false, 00:08:40.076 "abort": false, 00:08:40.076 "seek_hole": true, 00:08:40.076 "seek_data": true, 00:08:40.076 "copy": false, 00:08:40.076 "nvme_iov_md": false 00:08:40.076 }, 00:08:40.076 "driver_specific": { 00:08:40.076 "lvol": { 00:08:40.076 "lvol_store_uuid": "471864e8-b49b-4afd-a803-c1c678578b0e", 00:08:40.076 "base_bdev": "aio_bdev", 00:08:40.076 "thin_provision": false, 00:08:40.076 "num_allocated_clusters": 38, 00:08:40.076 "snapshot": false, 00:08:40.076 "clone": false, 00:08:40.076 "esnap_clone": false 00:08:40.076 } 00:08:40.076 } 00:08:40.076 } 00:08:40.076 ] 00:08:40.076 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:40.076 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:40.076 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:40.338 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:40.338 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:40.338 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:40.598 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:40.598 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:40.598 [2024-11-07 13:13:48.532583] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:40.598 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:40.598 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:40.598 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:40.599 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.599 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.599 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.599 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.599 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.599 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.599 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.599 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:40.599 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:40.860 request: 00:08:40.860 { 00:08:40.860 "uuid": "471864e8-b49b-4afd-a803-c1c678578b0e", 00:08:40.860 "method": "bdev_lvol_get_lvstores", 00:08:40.860 "req_id": 1 00:08:40.860 } 00:08:40.860 Got JSON-RPC error response 00:08:40.860 response: 00:08:40.860 { 00:08:40.860 "code": -19, 00:08:40.860 "message": "No such device" 00:08:40.860 } 00:08:40.860 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:40.860 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:40.860 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:40.860 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:40.860 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:41.120 aio_bdev 00:08:41.120 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 501a117e-664b-45fc-95d2-c80b5b3bfff8 00:08:41.120 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=501a117e-664b-45fc-95d2-c80b5b3bfff8 00:08:41.120 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:41.120 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:41.120 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:41.120 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:41.120 13:13:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:41.120 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 501a117e-664b-45fc-95d2-c80b5b3bfff8 -t 2000 00:08:41.381 [ 00:08:41.381 { 00:08:41.381 "name": "501a117e-664b-45fc-95d2-c80b5b3bfff8", 00:08:41.381 "aliases": [ 00:08:41.381 "lvs/lvol" 00:08:41.381 ], 00:08:41.381 "product_name": "Logical Volume", 00:08:41.381 "block_size": 4096, 00:08:41.381 "num_blocks": 38912, 00:08:41.381 "uuid": "501a117e-664b-45fc-95d2-c80b5b3bfff8", 00:08:41.381 "assigned_rate_limits": { 00:08:41.381 "rw_ios_per_sec": 0, 00:08:41.381 "rw_mbytes_per_sec": 0, 00:08:41.381 "r_mbytes_per_sec": 0, 00:08:41.381 "w_mbytes_per_sec": 0 00:08:41.381 }, 00:08:41.381 "claimed": false, 00:08:41.381 "zoned": false, 00:08:41.381 "supported_io_types": { 00:08:41.381 "read": true, 00:08:41.381 "write": true, 00:08:41.381 "unmap": true, 00:08:41.381 "flush": false, 00:08:41.381 "reset": true, 00:08:41.381 "nvme_admin": false, 00:08:41.381 "nvme_io": false, 00:08:41.381 "nvme_io_md": false, 00:08:41.381 "write_zeroes": true, 00:08:41.381 "zcopy": false, 00:08:41.381 "get_zone_info": false, 00:08:41.381 "zone_management": false, 00:08:41.381 "zone_append": false, 00:08:41.381 "compare": false, 00:08:41.381 "compare_and_write": false, 00:08:41.381 "abort": false, 00:08:41.381 "seek_hole": true, 00:08:41.381 "seek_data": true, 00:08:41.381 "copy": false, 00:08:41.381 "nvme_iov_md": false 00:08:41.381 }, 00:08:41.381 "driver_specific": { 00:08:41.381 "lvol": { 00:08:41.381 "lvol_store_uuid": "471864e8-b49b-4afd-a803-c1c678578b0e", 00:08:41.381 "base_bdev": "aio_bdev", 00:08:41.381 "thin_provision": false, 00:08:41.381 "num_allocated_clusters": 38, 00:08:41.381 "snapshot": false, 00:08:41.381 "clone": false, 00:08:41.381 "esnap_clone": false 00:08:41.381 } 00:08:41.381 } 00:08:41.381 } 00:08:41.381 ] 00:08:41.381 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:41.381 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:41.381 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:41.643 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:41.643 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:41.643 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:41.643 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:41.643 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 501a117e-664b-45fc-95d2-c80b5b3bfff8 00:08:41.903 13:13:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 471864e8-b49b-4afd-a803-c1c678578b0e 00:08:42.163 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:42.163 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.424 00:08:42.424 real 0m18.005s 00:08:42.424 user 0m46.645s 00:08:42.424 sys 0m3.081s 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:42.424 ************************************ 00:08:42.424 END TEST lvs_grow_dirty 00:08:42.424 ************************************ 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:42.424 nvmf_trace.0 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.424 rmmod nvme_tcp 00:08:42.424 rmmod nvme_fabrics 00:08:42.424 rmmod nvme_keyring 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:42.424 
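
Teardown after both subtests, as run above: the shared-memory trace buffer left by the target (nvmf_trace.0) is archived into the build's output directory, and the NVMe fabrics kernel modules are unloaded; the rmmod lines confirm each removal. The equivalent commands ($output_dir stands in for the spdk/../output path in this log):

    # Archive the target's trace shm file for offline analysis.
    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

    # Unload the NVMe/TCP fabrics modules.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
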
13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3656188 ']' 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3656188 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3656188 ']' 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3656188 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3656188 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3656188' 00:08:42.424 killing process with pid 3656188 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3656188 00:08:42.424 13:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3656188 00:08:43.367 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.367 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.367 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.367 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:43.367 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.367 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:43.367 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.367 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.367 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.367 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.367 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.367 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.281 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:45.281 00:08:45.281 real 0m46.944s 00:08:45.281 user 1m9.362s 00:08:45.281 sys 0m11.545s 00:08:45.281 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:45.281 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:45.281 ************************************ 00:08:45.281 END TEST nvmf_lvs_grow 00:08:45.281 ************************************ 00:08:45.542 13:13:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.543 ************************************ 00:08:45.543 START TEST nvmf_bdev_io_wait 00:08:45.543 ************************************ 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:45.543 * Looking for test storage... 00:08:45.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:45.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.543 --rc genhtml_branch_coverage=1 00:08:45.543 --rc genhtml_function_coverage=1 00:08:45.543 --rc genhtml_legend=1 00:08:45.543 --rc geninfo_all_blocks=1 00:08:45.543 --rc geninfo_unexecuted_blocks=1 00:08:45.543 00:08:45.543 ' 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:45.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.543 --rc genhtml_branch_coverage=1 00:08:45.543 --rc genhtml_function_coverage=1 00:08:45.543 --rc genhtml_legend=1 00:08:45.543 --rc geninfo_all_blocks=1 00:08:45.543 --rc geninfo_unexecuted_blocks=1 00:08:45.543 00:08:45.543 ' 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:45.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.543 --rc genhtml_branch_coverage=1 00:08:45.543 --rc genhtml_function_coverage=1 00:08:45.543 --rc genhtml_legend=1 00:08:45.543 --rc geninfo_all_blocks=1 00:08:45.543 --rc geninfo_unexecuted_blocks=1 00:08:45.543 00:08:45.543 ' 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:45.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.543 --rc genhtml_branch_coverage=1 00:08:45.543 --rc genhtml_function_coverage=1 00:08:45.543 --rc genhtml_legend=1 00:08:45.543 --rc geninfo_all_blocks=1 00:08:45.543 --rc geninfo_unexecuted_blocks=1 00:08:45.543 00:08:45.543 ' 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.543 13:13:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.543 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
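[editor's note] The "integer expression expected" message above is worth a note: build_nvmf_app_args reaches '[' '' -eq 1 ']' because the variable under test is empty, and -eq against an empty string is not a valid integer comparison; the test simply fails and the trace continues. A defensive sketch of the usual guard (SOME_TOGGLE is a hypothetical stand-in — the log does not show which variable common.sh line 33 tests):

  # Hypothetical guard: default an unset/empty toggle to 0 before an integer test.
  if [ "${SOME_TOGGLE:-0}" -eq 1 ]; then
      echo "toggle enabled"
  fi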
MALLOC_BLOCK_SIZE=512 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.804 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.946 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.946 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.946 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.946 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.946 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.946 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:53.947 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:53.947 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.947 13:14:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:53.947 Found net devices under 0000:31:00.0: cvl_0_0 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:53.947 Found net devices under 0000:31:00.1: cvl_0_1 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:53.947 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
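[editor's note] The discovery loop above is plain sysfs: for each supported PCI function (both E810 ports here report device ID 0x159b, bound to the ice driver), the glob /sys/bus/pci/devices/$pci/net/* lists the kernel interfaces behind it — hence the "Found net devices under 0000:31:00.x: cvl_0_x" lines. A standalone sketch of the same walk, with the addresses taken from the trace:

  # Sketch: resolve net interfaces behind a PCI function via sysfs, as the
  # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion above does.
  for pci in 0000:31:00.0 0000:31:00.1; do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$path" ] || continue      # glob may match nothing
          echo "Found net devices under $pci: ${path##*/}"
      done
  done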
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:53.948 13:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:54.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.755 ms 00:08:54.210 00:08:54.210 --- 10.0.0.2 ping statistics --- 00:08:54.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.210 rtt min/avg/max/mdev = 0.755/0.755/0.755/0.000 ms 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:08:54.210 00:08:54.210 --- 10.0.0.1 ping statistics --- 00:08:54.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.210 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3661903 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3661903 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3661903 ']' 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.210 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:54.211 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.471 [2024-11-07 13:14:02.221539] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
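[editor's note] Unpacking nvmf_tcp_init from the trace: the target-side port cvl_0_0 moves into a fresh namespace while the initiator-side port cvl_0_1 stays in the root namespace, turning a back-to-back cable into a two-host topology on one machine; the iptables rule opens NVMe/TCP port 4420, and the two pings prove reachability in both directions. Condensed from the commands above (order lightly rearranged):

  # The namespace topology nvmf_tcp_init builds (names/IPs from the trace).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # netns -> initiator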
00:08:54.471 [2024-11-07 13:14:02.221665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.471 [2024-11-07 13:14:02.377914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.732 [2024-11-07 13:14:02.477248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.732 [2024-11-07 13:14:02.477296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.732 [2024-11-07 13:14:02.477308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.732 [2024-11-07 13:14:02.477320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.732 [2024-11-07 13:14:02.477329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.732 [2024-11-07 13:14:02.479617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.732 [2024-11-07 13:14:02.479699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.732 [2024-11-07 13:14:02.479817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.732 [2024-11-07 13:14:02.479842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.992 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:54.992 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:54.992 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:54.992 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:54.992 13:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
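[editor's note] A quick decode of the core masks in play: nvmf_tgt was started with -m 0xF, binary 1111, i.e. cores 0-3 — matching the four reactor lines above. The four bdevperf clients launched below use -m 0x10, 0x20, 0x40 and 0x80 (cores 4, 5, 6 and 7), so target and initiator reactors never share a core. The bit-to-core arithmetic, as a one-liner:

  # Mask-to-core arithmetic, e.g. for 0xF -> "cores: 0 1 2 3".
  printf 'cores:'; for c in {0..7}; do (( (0xF >> c) & 1 )) && printf ' %d' "$c"; done; echo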
# set +x 00:08:55.253 [2024-11-07 13:14:03.219472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.253 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.515 Malloc0 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.515 [2024-11-07 13:14:03.317806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3662040 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3662043 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.515 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.515 { 00:08:55.515 "params": { 
00:08:55.515 "name": "Nvme$subsystem", 00:08:55.515 "trtype": "$TEST_TRANSPORT", 00:08:55.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.516 "adrfam": "ipv4", 00:08:55.516 "trsvcid": "$NVMF_PORT", 00:08:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.516 "hdgst": ${hdgst:-false}, 00:08:55.516 "ddgst": ${ddgst:-false} 00:08:55.516 }, 00:08:55.516 "method": "bdev_nvme_attach_controller" 00:08:55.516 } 00:08:55.516 EOF 00:08:55.516 )") 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3662046 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3662050 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.516 { 00:08:55.516 "params": { 00:08:55.516 "name": "Nvme$subsystem", 00:08:55.516 "trtype": "$TEST_TRANSPORT", 00:08:55.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.516 "adrfam": "ipv4", 00:08:55.516 "trsvcid": "$NVMF_PORT", 00:08:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.516 "hdgst": ${hdgst:-false}, 00:08:55.516 "ddgst": ${ddgst:-false} 00:08:55.516 }, 00:08:55.516 "method": "bdev_nvme_attach_controller" 00:08:55.516 } 00:08:55.516 EOF 00:08:55.516 )") 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.516 { 00:08:55.516 "params": { 
00:08:55.516 "name": "Nvme$subsystem", 00:08:55.516 "trtype": "$TEST_TRANSPORT", 00:08:55.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.516 "adrfam": "ipv4", 00:08:55.516 "trsvcid": "$NVMF_PORT", 00:08:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.516 "hdgst": ${hdgst:-false}, 00:08:55.516 "ddgst": ${ddgst:-false} 00:08:55.516 }, 00:08:55.516 "method": "bdev_nvme_attach_controller" 00:08:55.516 } 00:08:55.516 EOF 00:08:55.516 )") 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.516 { 00:08:55.516 "params": { 00:08:55.516 "name": "Nvme$subsystem", 00:08:55.516 "trtype": "$TEST_TRANSPORT", 00:08:55.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.516 "adrfam": "ipv4", 00:08:55.516 "trsvcid": "$NVMF_PORT", 00:08:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.516 "hdgst": ${hdgst:-false}, 00:08:55.516 "ddgst": ${ddgst:-false} 00:08:55.516 }, 00:08:55.516 "method": "bdev_nvme_attach_controller" 00:08:55.516 } 00:08:55.516 EOF 00:08:55.516 )") 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3662040 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.516 "params": { 00:08:55.516 "name": "Nvme1", 00:08:55.516 "trtype": "tcp", 00:08:55.516 "traddr": "10.0.0.2", 00:08:55.516 "adrfam": "ipv4", 00:08:55.516 "trsvcid": "4420", 00:08:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.516 "hdgst": false, 00:08:55.516 "ddgst": false 00:08:55.516 }, 00:08:55.516 "method": "bdev_nvme_attach_controller" 00:08:55.516 }' 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.516 "params": { 00:08:55.516 "name": "Nvme1", 00:08:55.516 "trtype": "tcp", 00:08:55.516 "traddr": "10.0.0.2", 00:08:55.516 "adrfam": "ipv4", 00:08:55.516 "trsvcid": "4420", 00:08:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.516 "hdgst": false, 00:08:55.516 "ddgst": false 00:08:55.516 }, 00:08:55.516 "method": "bdev_nvme_attach_controller" 00:08:55.516 }' 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.516 "params": { 00:08:55.516 "name": "Nvme1", 00:08:55.516 "trtype": "tcp", 00:08:55.516 "traddr": "10.0.0.2", 00:08:55.516 "adrfam": "ipv4", 00:08:55.516 "trsvcid": "4420", 00:08:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.516 "hdgst": false, 00:08:55.516 "ddgst": false 00:08:55.516 }, 00:08:55.516 "method": "bdev_nvme_attach_controller" 00:08:55.516 }' 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:55.516 13:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.516 "params": { 00:08:55.516 "name": "Nvme1", 00:08:55.516 "trtype": "tcp", 00:08:55.516 "traddr": "10.0.0.2", 00:08:55.516 "adrfam": "ipv4", 00:08:55.516 "trsvcid": "4420", 00:08:55.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.516 "hdgst": false, 00:08:55.516 "ddgst": false 00:08:55.516 }, 00:08:55.516 "method": "bdev_nvme_attach_controller" 00:08:55.516 }' 00:08:55.516 [2024-11-07 13:14:03.401016] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:08:55.516 [2024-11-07 13:14:03.401132] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:55.516 [2024-11-07 13:14:03.403767] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:08:55.516 [2024-11-07 13:14:03.403885] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:55.516 [2024-11-07 13:14:03.403892] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:08:55.516 [2024-11-07 13:14:03.403992] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:55.516 [2024-11-07 13:14:03.404714] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
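[editor's note] The /dev/fd/63 argument in every bdevperf command line above is bash process substitution: gen_nvmf_target_json (from the harness's nvmf/common.sh) assembles the bdev_nvme_attach_controller entries just printed, and bdevperf reads them as its JSON config without a temp file. Shape of the idiom, flags copied from the write job above:

  # Sketch: feed a generated JSON config to bdevperf via process substitution;
  # the child process sees the pipe as /dev/fd/63, exactly as the trace shows.
  ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256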
00:08:55.516 [2024-11-07 13:14:03.404808] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:55.777 [2024-11-07 13:14:03.613207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.777 [2024-11-07 13:14:03.671013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.777 [2024-11-07 13:14:03.710236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:55.777 [2024-11-07 13:14:03.735620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.777 [2024-11-07 13:14:03.764979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:55.777 [2024-11-07 13:14:03.767159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.037 [2024-11-07 13:14:03.832832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:56.037 [2024-11-07 13:14:03.862891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:56.297 Running I/O for 1 seconds... 00:08:56.297 Running I/O for 1 seconds... 00:08:56.297 Running I/O for 1 seconds... 00:08:56.557 Running I/O for 1 seconds... 00:08:57.127 13306.00 IOPS, 51.98 MiB/s 00:08:57.127 Latency(us) 00:08:57.128 [2024-11-07T12:14:05.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.128 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:57.128 Nvme1n1 : 1.01 13365.87 52.21 0.00 0.00 9544.28 4014.08 16602.45 00:08:57.128 [2024-11-07T12:14:05.135Z] =================================================================================================================== 00:08:57.128 [2024-11-07T12:14:05.135Z] Total : 13365.87 52.21 0.00 0.00 9544.28 4014.08 16602.45 00:08:57.388 6842.00 IOPS, 26.73 MiB/s 00:08:57.388 Latency(us) 00:08:57.388 [2024-11-07T12:14:05.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.388 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:57.388 Nvme1n1 : 1.02 6860.12 26.80 0.00 0.00 18489.84 4969.81 27088.21 00:08:57.388 [2024-11-07T12:14:05.395Z] =================================================================================================================== 00:08:57.388 [2024-11-07T12:14:05.395Z] Total : 6860.12 26.80 0.00 0.00 18489.84 4969.81 27088.21 00:08:57.388 174176.00 IOPS, 680.38 MiB/s 00:08:57.388 Latency(us) 00:08:57.388 [2024-11-07T12:14:05.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.388 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:57.388 Nvme1n1 : 1.00 173812.56 678.96 0.00 0.00 732.50 332.80 2075.31 00:08:57.388 [2024-11-07T12:14:05.395Z] =================================================================================================================== 00:08:57.388 [2024-11-07T12:14:05.395Z] Total : 173812.56 678.96 0.00 0.00 732.50 332.80 2075.31 00:08:57.388 6891.00 IOPS, 26.92 MiB/s 00:08:57.388 Latency(us) 00:08:57.388 [2024-11-07T12:14:05.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.388 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:57.388 Nvme1n1 : 1.01 6971.45 27.23 0.00 0.00 18293.04 5543.25 39758.51 00:08:57.388 [2024-11-07T12:14:05.395Z] 
=================================================================================================================== 00:08:57.388 [2024-11-07T12:14:05.395Z] Total : 6971.45 27.23 0.00 0.00 18293.04 5543.25 39758.51 00:08:57.648 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3662043 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3662046 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3662050 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:57.907 rmmod nvme_tcp 00:08:57.907 rmmod nvme_fabrics 00:08:57.907 rmmod nvme_keyring 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3661903 ']' 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3661903 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3661903 ']' 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3661903 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:57.907 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3661903 00:08:58.167 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:58.167 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:58.167 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # 
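[editor's note] A consistency check on the tables above: at the 4096-byte I/O size, MiB/s should equal IOPS x 4096 / 1048576 — e.g. the write job's 13365.87 IOPS x 4096 B ≈ 52.21 MiB/s, matching the printed column. The flush job's ~174k IOPS towers over the read/write/unmap jobs plausibly because flushing a malloc (RAM-backed) bdev touches no media and completes almost for free.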
echo 'killing process with pid 3661903' 00:08:58.167 killing process with pid 3661903 00:08:58.167 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3661903 00:08:58.167 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3661903 00:08:58.736 13:14:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:58.737 13:14:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:58.737 13:14:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:58.737 13:14:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:58.737 13:14:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:58.737 13:14:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:58.737 13:14:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:58.737 13:14:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:58.737 13:14:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:58.737 13:14:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.737 13:14:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.737 13:14:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.774 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:00.774 00:09:00.774 real 0m15.398s 00:09:00.774 user 0m25.917s 00:09:00.774 sys 0m8.435s 00:09:00.774 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.774 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.774 ************************************ 00:09:00.774 END TEST nvmf_bdev_io_wait 00:09:00.774 ************************************ 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.098 ************************************ 00:09:01.098 START TEST nvmf_queue_depth 00:09:01.098 ************************************ 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:01.098 * Looking for test storage... 
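[editor's note] Before nvmf_queue_depth repeats the same bootstrap below, note the teardown idiom the previous test just used: every firewall rule the harness adds carries an SPDK_NVMF comment, so cleanup is a filter-and-restore rather than rule-by-rule deletion. Condensed (the netns deletion inside _remove_spdk_ns is an assumption — the trace redirects that function's output away):

  # nvmftestfini's network cleanup, condensed from the trace.
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1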
00:09:01.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.098 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:01.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.099 --rc genhtml_branch_coverage=1 00:09:01.099 --rc genhtml_function_coverage=1 00:09:01.099 --rc genhtml_legend=1 00:09:01.099 --rc geninfo_all_blocks=1 00:09:01.099 --rc geninfo_unexecuted_blocks=1 00:09:01.099 00:09:01.099 ' 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:01.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.099 --rc genhtml_branch_coverage=1 00:09:01.099 --rc genhtml_function_coverage=1 00:09:01.099 --rc genhtml_legend=1 00:09:01.099 --rc geninfo_all_blocks=1 00:09:01.099 --rc geninfo_unexecuted_blocks=1 00:09:01.099 00:09:01.099 ' 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:01.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.099 --rc genhtml_branch_coverage=1 00:09:01.099 --rc genhtml_function_coverage=1 00:09:01.099 --rc genhtml_legend=1 00:09:01.099 --rc geninfo_all_blocks=1 00:09:01.099 --rc geninfo_unexecuted_blocks=1 00:09:01.099 00:09:01.099 ' 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:01.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.099 --rc genhtml_branch_coverage=1 00:09:01.099 --rc genhtml_function_coverage=1 00:09:01.099 --rc genhtml_legend=1 00:09:01.099 --rc geninfo_all_blocks=1 00:09:01.099 --rc geninfo_unexecuted_blocks=1 00:09:01.099 00:09:01.099 ' 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.099 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.100 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.100 13:14:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:09.250 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:09.250 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:09.250 Found net devices under 0000:31:00.0: cvl_0_0 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:09.250 Found net devices under 0000:31:00.1: cvl_0_1 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:09.250 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:09.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:09:09.251 00:09:09.251 --- 10.0.0.2 ping statistics --- 00:09:09.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.251 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:09:09.251 00:09:09.251 --- 10.0.0.1 ping statistics --- 00:09:09.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.251 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3667296 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3667296 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3667296 ']' 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:09.251 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.251 [2024-11-07 13:14:16.866229] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
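The sequence above is the target bring-up: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x2, and waitforlisten then polls until the app answers on its RPC socket (the trace shows rpc_addr=/var/tmp/spdk.sock and max_retries=100). A condensed sketch of that polling pattern follows; the loop shape is an illustration, not the exact autotest_common.sh helper:

    # Sketch only: poll the SPDK RPC socket until the app is ready.
    # rpc_get_methods is a real SPDK RPC; socket path and retry count
    # mirror the trace above, the rest is illustrative.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # app exited early
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                      # never started listening
    }

(The earlier "[: : integer expression expected" message is unrelated and harmless: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) rejects the empty string as a non-integer, so the branch is simply skipped.)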
00:09:09.251 [2024-11-07 13:14:16.866359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.251 [2024-11-07 13:14:17.050573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.251 [2024-11-07 13:14:17.174303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.251 [2024-11-07 13:14:17.174367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.251 [2024-11-07 13:14:17.174380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.251 [2024-11-07 13:14:17.174393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.251 [2024-11-07 13:14:17.174405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.251 [2024-11-07 13:14:17.175907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.823 [2024-11-07 13:14:17.698393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.823 Malloc0 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.823 13:14:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.823 [2024-11-07 13:14:17.810981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3667642 00:09:09.823 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:09.824 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:09.824 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3667642 /var/tmp/bdevperf.sock 00:09:09.824 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3667642 ']' 00:09:09.824 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:09.824 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:09.824 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:09.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:09.824 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:09.824 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.084 [2024-11-07 13:14:17.906568] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
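For reference, the whole queue-depth target setup traced above reduces to five RPC calls, reproduced verbatim from queue_depth.sh@23 through @27 (rpc_cmd in the trace wraps spdk/scripts/rpc.py against the default /var/tmp/spdk.sock):

    # Exactly the wiring issued above: transport, backing bdev, subsystem,
    # namespace, and a TCP listener on the target-side address.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevperf instance launched next attaches to that listener over its own RPC socket (/var/tmp/bdevperf.sock) and drives it with 4 KiB verify I/O at queue depth 1024 for 10 seconds (-q 1024 -o 4096 -w verify -t 10).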
00:09:10.084 [2024-11-07 13:14:17.906699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667642 ] 00:09:10.084 [2024-11-07 13:14:18.058749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.344 [2024-11-07 13:14:18.157245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.914 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:10.914 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:10.914 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:10.914 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.914 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.914 NVMe0n1 00:09:10.914 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.914 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:11.175 Running I/O for 10 seconds... 00:09:13.069 9216.00 IOPS, 36.00 MiB/s [2024-11-07T12:14:22.017Z] 9909.00 IOPS, 38.71 MiB/s [2024-11-07T12:14:23.400Z] 10168.00 IOPS, 39.72 MiB/s [2024-11-07T12:14:24.341Z] 10241.25 IOPS, 40.00 MiB/s [2024-11-07T12:14:25.283Z] 10267.80 IOPS, 40.11 MiB/s [2024-11-07T12:14:26.224Z] 10373.33 IOPS, 40.52 MiB/s [2024-11-07T12:14:27.165Z] 10379.29 IOPS, 40.54 MiB/s [2024-11-07T12:14:28.105Z] 10370.38 IOPS, 40.51 MiB/s [2024-11-07T12:14:29.047Z] 10404.22 IOPS, 40.64 MiB/s [2024-11-07T12:14:29.307Z] 10441.80 IOPS, 40.79 MiB/s 00:09:21.300 Latency(us) 00:09:21.300 [2024-11-07T12:14:29.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.300 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:21.300 Verification LBA range: start 0x0 length 0x4000 00:09:21.300 NVMe0n1 : 10.07 10440.20 40.78 0.00 0.00 97590.41 27962.03 76895.57 00:09:21.300 [2024-11-07T12:14:29.307Z] =================================================================================================================== 00:09:21.300 [2024-11-07T12:14:29.307Z] Total : 10440.20 40.78 0.00 0.00 97590.41 27962.03 76895.57 00:09:21.300 { 00:09:21.300 "results": [ 00:09:21.300 { 00:09:21.300 "job": "NVMe0n1", 00:09:21.300 "core_mask": "0x1", 00:09:21.300 "workload": "verify", 00:09:21.300 "status": "finished", 00:09:21.300 "verify_range": { 00:09:21.300 "start": 0, 00:09:21.300 "length": 16384 00:09:21.300 }, 00:09:21.300 "queue_depth": 1024, 00:09:21.300 "io_size": 4096, 00:09:21.300 "runtime": 10.072411, 00:09:21.300 "iops": 10440.201457228066, 00:09:21.300 "mibps": 40.78203694229713, 00:09:21.300 "io_failed": 0, 00:09:21.300 "io_timeout": 0, 00:09:21.300 "avg_latency_us": 97590.41202203669, 00:09:21.300 "min_latency_us": 27962.02666666667, 00:09:21.300 "max_latency_us": 76895.57333333333 00:09:21.300 } 00:09:21.300 ], 00:09:21.300 "core_count": 1 00:09:21.300 } 00:09:21.300 13:14:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3667642 00:09:21.300 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3667642 ']' 00:09:21.300 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3667642 00:09:21.300 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:21.300 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:21.300 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3667642 00:09:21.300 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:21.300 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:21.300 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3667642' 00:09:21.300 killing process with pid 3667642 00:09:21.300 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3667642 00:09:21.300 Received shutdown signal, test time was about 10.000000 seconds 00:09:21.300 00:09:21.300 Latency(us) 00:09:21.300 [2024-11-07T12:14:29.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.300 [2024-11-07T12:14:29.307Z] =================================================================================================================== 00:09:21.300 [2024-11-07T12:14:29.307Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:21.300 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3667642 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.242 rmmod nvme_tcp 00:09:22.242 rmmod nvme_fabrics 00:09:22.242 rmmod nvme_keyring 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3667296 ']' 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3667296 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3667296 ']' 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 3667296 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3667296 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3667296' 00:09:22.242 killing process with pid 3667296 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3667296 00:09:22.242 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3667296 00:09:23.183 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:23.183 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:23.183 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:23.183 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:23.183 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:23.183 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:23.183 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:23.183 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.183 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.183 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.183 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.183 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.095 13:14:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.095 00:09:25.095 real 0m24.147s 00:09:25.095 user 0m27.732s 00:09:25.095 sys 0m7.470s 00:09:25.095 13:14:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:25.095 13:14:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.095 ************************************ 00:09:25.095 END TEST nvmf_queue_depth 00:09:25.095 ************************************ 00:09:25.095 13:14:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:25.095 13:14:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:25.095 13:14:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:25.095 13:14:32 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.095 ************************************ 00:09:25.095 START TEST nvmf_target_multipath 00:09:25.095 ************************************ 00:09:25.096 13:14:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:25.096 * Looking for test storage... 00:09:25.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.096 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:25.096 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:25.096 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:25.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.358 --rc genhtml_branch_coverage=1 00:09:25.358 --rc genhtml_function_coverage=1 00:09:25.358 --rc genhtml_legend=1 00:09:25.358 --rc geninfo_all_blocks=1 00:09:25.358 --rc geninfo_unexecuted_blocks=1 00:09:25.358 00:09:25.358 ' 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:25.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.358 --rc genhtml_branch_coverage=1 00:09:25.358 --rc genhtml_function_coverage=1 00:09:25.358 --rc genhtml_legend=1 00:09:25.358 --rc geninfo_all_blocks=1 00:09:25.358 --rc geninfo_unexecuted_blocks=1 00:09:25.358 00:09:25.358 ' 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:25.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.358 --rc genhtml_branch_coverage=1 00:09:25.358 --rc genhtml_function_coverage=1 00:09:25.358 --rc genhtml_legend=1 00:09:25.358 --rc geninfo_all_blocks=1 00:09:25.358 --rc geninfo_unexecuted_blocks=1 00:09:25.358 00:09:25.358 ' 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:25.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.358 --rc genhtml_branch_coverage=1 00:09:25.358 --rc genhtml_function_coverage=1 00:09:25.358 --rc genhtml_legend=1 00:09:25.358 --rc geninfo_all_blocks=1 00:09:25.358 --rc geninfo_unexecuted_blocks=1 00:09:25.358 00:09:25.358 ' 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.358 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.359 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:33.499 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:33.499 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.499 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:33.500 Found net devices under 0000:31:00.0: cvl_0_0 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.500 13:14:41 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:33.500 Found net devices under 0000:31:00.1: cvl_0_1 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:33.500 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:09:33.761 00:09:33.761 --- 10.0.0.2 ping statistics --- 00:09:33.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.761 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:09:33.761 00:09:33.761 --- 10.0.0.1 ping statistics --- 00:09:33.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.761 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:33.761 only one NIC for nvmf test 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
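What nvmf_tcp_init traced above amounts to: with two physical E810 ports and no switch between them, the test moves one port into a private network namespace and keeps the other as the initiator side, so target and initiator traffic really crosses the wire. A condensed sketch of the commands, using the cvl_0_0/cvl_0_1 names this host happens to expose:

  # target port goes into its own namespace; initiator port stays in the default one
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port; the comment tags the rule so teardown can find it again
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The matching nvmftestfini teardown, traced below, unloads nvme-tcp/nvme-fabrics, strips the tagged rules with iptables-save | grep -v SPDK_NVMF | iptables-restore, removes the namespace via _remove_spdk_ns, and flushes the initiator address.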
00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.761 rmmod nvme_tcp 00:09:33.761 rmmod nvme_fabrics 00:09:33.761 rmmod nvme_keyring 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.761 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.308 00:09:36.308 real 0m10.902s 00:09:36.308 user 0m2.391s 00:09:36.308 sys 0m6.433s 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:36.308 ************************************ 00:09:36.308 END TEST nvmf_target_multipath 00:09:36.308 ************************************ 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.308 ************************************ 00:09:36.308 START TEST nvmf_zcopy 00:09:36.308 ************************************ 00:09:36.308 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:36.308 * Looking for test storage... 
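One recurring wart in these traces deserves a note: '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected' appears every time common.sh is sourced, because line 33 runs '[' '' -eq 1 ']' and test(1) refuses a numeric comparison against an empty string. The script survives since the failed test simply evaluates false, and the mechanism is easy to see in isolation (the variable name here is hypothetical):

  flag=''
  [ "$flag" -eq 1 ] && echo on       # prints '[: : integer expression expected' on stderr
  [ "${flag:-0}" -eq 1 ] && echo on  # defaulted expansion: quietly false, no error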
00:09:36.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:36.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.308 --rc genhtml_branch_coverage=1 00:09:36.308 --rc genhtml_function_coverage=1 00:09:36.308 --rc genhtml_legend=1 00:09:36.308 --rc geninfo_all_blocks=1 00:09:36.308 --rc geninfo_unexecuted_blocks=1 00:09:36.308 00:09:36.308 ' 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:36.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.308 --rc genhtml_branch_coverage=1 00:09:36.308 --rc genhtml_function_coverage=1 00:09:36.308 --rc genhtml_legend=1 00:09:36.308 --rc geninfo_all_blocks=1 00:09:36.308 --rc geninfo_unexecuted_blocks=1 00:09:36.308 00:09:36.308 ' 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:36.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.308 --rc genhtml_branch_coverage=1 00:09:36.308 --rc genhtml_function_coverage=1 00:09:36.308 --rc genhtml_legend=1 00:09:36.308 --rc geninfo_all_blocks=1 00:09:36.308 --rc geninfo_unexecuted_blocks=1 00:09:36.308 00:09:36.308 ' 00:09:36.308 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:36.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.308 --rc genhtml_branch_coverage=1 00:09:36.309 --rc genhtml_function_coverage=1 00:09:36.309 --rc genhtml_legend=1 00:09:36.309 --rc geninfo_all_blocks=1 00:09:36.309 --rc geninfo_unexecuted_blocks=1 00:09:36.309 00:09:36.309 ' 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.309 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:44.456 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:44.456 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:44.456 Found net devices under 0000:31:00.0: cvl_0_0 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:44.456 Found net devices under 0000:31:00.1: cvl_0_1 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:44.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:44.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:09:44.456 00:09:44.456 --- 10.0.0.2 ping statistics --- 00:09:44.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.456 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:09:44.456 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:44.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:44.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:09:44.457 00:09:44.457 --- 10.0.0.1 ping statistics --- 00:09:44.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.457 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:44.457 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.731 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3679607 00:09:44.731 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3679607 00:09:44.731 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:44.731 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3679607 ']' 00:09:44.731 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.731 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:44.731 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.731 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:44.731 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.731 [2024-11-07 13:14:52.558628] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
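nvmfappstart above boils down to launching nvmf_tgt inside the target namespace and parking until its RPC socket answers; waitforlisten is the parking step. A rough sketch of the shape of it (the real waitforlisten also checks that the pid is still alive and enforces a timeout, so treat the polling loop as illustrative):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app is servicing requests
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done

The -m 0x2 mask pins the target's reactor to core 1 (confirmed by the reactor notice below), leaving core 0 free for the bdevperf initiator started later with -c 0x1.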
00:09:44.731 [2024-11-07 13:14:52.558762] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.994 [2024-11-07 13:14:52.742928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.994 [2024-11-07 13:14:52.868016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.995 [2024-11-07 13:14:52.868079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.995 [2024-11-07 13:14:52.868092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.995 [2024-11-07 13:14:52.868106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.995 [2024-11-07 13:14:52.868118] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.995 [2024-11-07 13:14:52.869577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.567 [2024-11-07 13:14:53.390611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.567 [2024-11-07 13:14:53.414957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.567 malloc0 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:45.567 { 00:09:45.567 "params": { 00:09:45.567 "name": "Nvme$subsystem", 00:09:45.567 "trtype": "$TEST_TRANSPORT", 00:09:45.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.567 "adrfam": "ipv4", 00:09:45.567 "trsvcid": "$NVMF_PORT", 00:09:45.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.567 "hdgst": ${hdgst:-false}, 00:09:45.567 "ddgst": ${ddgst:-false} 00:09:45.567 }, 00:09:45.567 "method": "bdev_nvme_attach_controller" 00:09:45.567 } 00:09:45.567 EOF 00:09:45.567 )") 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
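Pulled out of the rpc_cmd traces above, provisioning the zcopy target is six RPCs end to end (rpc_cmd is roughly a wrapper around scripts/rpc.py aimed at the target in the namespace):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport with zero-copy enabled
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB RAM bdev, 4 KiB blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

In nvmf_create_subsystem, -a allows any host NQN to connect and -m 10 caps the namespace count; the bdevperf config generated next by gen_nvmf_target_json is the initiator-side mirror of this listener.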
00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:45.567 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:45.567 "params": {
00:09:45.567 "name": "Nvme1",
00:09:45.567 "trtype": "tcp",
00:09:45.567 "traddr": "10.0.0.2",
00:09:45.567 "adrfam": "ipv4",
00:09:45.567 "trsvcid": "4420",
00:09:45.567 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:45.567 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:45.567 "hdgst": false,
00:09:45.567 "ddgst": false
00:09:45.567 },
00:09:45.567 "method": "bdev_nvme_attach_controller"
00:09:45.567 }'
00:09:45.828 [2024-11-07 13:14:53.573470] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:09:45.828 [2024-11-07 13:14:53.573595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679936 ]
00:09:45.828 [2024-11-07 13:14:53.724492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:45.828 [2024-11-07 13:14:53.823097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:46.398 Running I/O for 10 seconds...
00:09:48.725 6533.00 IOPS, 51.04 MiB/s
[2024-11-07T12:14:57.673Z] 7646.00 IOPS, 59.73 MiB/s
[2024-11-07T12:14:58.615Z] 8006.67 IOPS, 62.55 MiB/s
[2024-11-07T12:14:59.554Z] 8205.75 IOPS, 64.11 MiB/s
[2024-11-07T12:15:00.504Z] 8320.60 IOPS, 65.00 MiB/s
[2024-11-07T12:15:01.446Z] 8385.67 IOPS, 65.51 MiB/s
[2024-11-07T12:15:02.387Z] 8438.29 IOPS, 65.92 MiB/s
[2024-11-07T12:15:03.772Z] 8477.38 IOPS, 66.23 MiB/s
[2024-11-07T12:15:04.342Z] 8514.78 IOPS, 66.52 MiB/s
[2024-11-07T12:15:04.602Z] 8540.30 IOPS, 66.72 MiB/s
00:09:56.595 Latency(us)
00:09:56.595 [2024-11-07T12:15:04.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:56.595 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:56.595 Verification LBA range: start 0x0 length 0x1000
00:09:56.595 Nvme1n1 : 10.01 8542.51 66.74 0.00 0.00 14929.04 1549.65 29928.11
00:09:56.595 [2024-11-07T12:15:04.602Z] ===================================================================================================================
00:09:56.595 [2024-11-07T12:15:04.602Z] Total : 8542.51 66.74 0.00 0.00 14929.04 1549.65 29928.11
00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3682160
00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:57.165 {
00:09:57.165 "params": {
00:09:57.165 "name":
"Nvme$subsystem", 00:09:57.165 "trtype": "$TEST_TRANSPORT", 00:09:57.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.165 "adrfam": "ipv4", 00:09:57.165 "trsvcid": "$NVMF_PORT", 00:09:57.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.165 "hdgst": ${hdgst:-false}, 00:09:57.165 "ddgst": ${ddgst:-false} 00:09:57.165 }, 00:09:57.165 "method": "bdev_nvme_attach_controller" 00:09:57.165 } 00:09:57.165 EOF 00:09:57.165 )") 00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:57.165 [2024-11-07 13:15:04.949506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:04.949542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:57.165 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:57.165 "params": { 00:09:57.165 "name": "Nvme1", 00:09:57.165 "trtype": "tcp", 00:09:57.165 "traddr": "10.0.0.2", 00:09:57.165 "adrfam": "ipv4", 00:09:57.165 "trsvcid": "4420", 00:09:57.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.165 "hdgst": false, 00:09:57.165 "ddgst": false 00:09:57.165 }, 00:09:57.165 "method": "bdev_nvme_attach_controller" 00:09:57.165 }' 00:09:57.165 [2024-11-07 13:15:04.961508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:04.961529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:04.973518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:04.973537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:04.985562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:04.985581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:04.997585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:04.997603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.009610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.009628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.021230] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:09:57.165 [2024-11-07 13:15:05.021326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682160 ] 00:09:57.165 [2024-11-07 13:15:05.021664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.021680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.033677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.033694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.045700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.045717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.057745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.057762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.069765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.069783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.081804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.081820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.093833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.093850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.105856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.105880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.117903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.117920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.129937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.129953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.141953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.141970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.165 [2024-11-07 13:15:05.153995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.165 [2024-11-07 13:15:05.154011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.166 [2024-11-07 13:15:05.159238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.166 [2024-11-07 13:15:05.166043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.166 [2024-11-07 13:15:05.166059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:57.426 [2024-11-07 13:15:05.178060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.426 [2024-11-07 13:15:05.178077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.426 [2024-11-07 13:15:05.190092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.426 [2024-11-07 13:15:05.190108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.426 [2024-11-07 13:15:05.202109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.426 [2024-11-07 13:15:05.202125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.426 [2024-11-07 13:15:05.214152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.426 [2024-11-07 13:15:05.214169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.426 [2024-11-07 13:15:05.226183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.426 [2024-11-07 13:15:05.226200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.426 [2024-11-07 13:15:05.238208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.426 [2024-11-07 13:15:05.238225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.426 [2024-11-07 13:15:05.250248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.426 [2024-11-07 13:15:05.250264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.426 [2024-11-07 13:15:05.257178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.426 [2024-11-07 13:15:05.262272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.426 [2024-11-07 13:15:05.262289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.426 [2024-11-07 13:15:05.274313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.426 [2024-11-07 13:15:05.274330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.426 [2024-11-07 13:15:05.286339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.426 [2024-11-07 13:15:05.286356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.426 [2024-11-07 13:15:05.298360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.426 [2024-11-07 13:15:05.298376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.427 [2024-11-07 13:15:05.310410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.427 [2024-11-07 13:15:05.310427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.427 [2024-11-07 13:15:05.322434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.427 [2024-11-07 13:15:05.322451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.427 [2024-11-07 13:15:05.334453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.427 [2024-11-07 13:15:05.334470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.427 [2024-11-07 
13:15:05.346497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.427 [2024-11-07 13:15:05.346513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.427 [2024-11-07 13:15:05.358513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.427 [2024-11-07 13:15:05.358528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.427 [2024-11-07 13:15:05.370556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.427 [2024-11-07 13:15:05.370572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.427 [2024-11-07 13:15:05.382587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.427 [2024-11-07 13:15:05.382602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.427 [2024-11-07 13:15:05.394623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.427 [2024-11-07 13:15:05.394639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.427 [2024-11-07 13:15:05.406651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.427 [2024-11-07 13:15:05.406667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.427 [2024-11-07 13:15:05.418681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.427 [2024-11-07 13:15:05.418698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.427 [2024-11-07 13:15:05.430702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.427 [2024-11-07 13:15:05.430718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.442744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.687 [2024-11-07 13:15:05.442760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.454796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.687 [2024-11-07 13:15:05.454813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.466802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.687 [2024-11-07 13:15:05.466818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.478835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.687 [2024-11-07 13:15:05.478851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.490856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.687 [2024-11-07 13:15:05.490877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.502902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.687 [2024-11-07 13:15:05.502918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.514950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.687 [2024-11-07 13:15:05.514970] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.526961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.687 [2024-11-07 13:15:05.526978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.539005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.687 [2024-11-07 13:15:05.539022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.551024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.687 [2024-11-07 13:15:05.551043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.563069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.687 [2024-11-07 13:15:05.563085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.575101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.687 [2024-11-07 13:15:05.575117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.687 [2024-11-07 13:15:05.587136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.688 [2024-11-07 13:15:05.587153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.688 [2024-11-07 13:15:05.599165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.688 [2024-11-07 13:15:05.599181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.688 [2024-11-07 13:15:05.648345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.688 [2024-11-07 13:15:05.648365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.688 [2024-11-07 13:15:05.659310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.688 [2024-11-07 13:15:05.659326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.688 Running I/O for 5 seconds... 
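Everything from here to the end of the section is one repeating target-side pattern: subsystem.c:2123 rejects nvmf_subsystem_add_ns because NSID 1 is still attached to cnode1, and nvmf_rpc.c:1517 then fails the RPC with "Unable to add namespace" (the nvmf_rpc_ns_paused frame suggests the RPC pauses the subsystem around the attempt). The pair recurs every 12-13 ms, starting while the second bdevperf instance is still initializing and continuing through the 5-second randrw run; only the per-second IOPS markers (17394.00 and 17446.50 IOPS below) interrupt it. That cadence is consistent with the test re-issuing the namespace-add RPC in a tight loop while I/O is in flight. A minimal loop with the same error signature would look like this (an illustrative sketch, not the actual zcopy.sh loop):

    # Illustrative sketch: hammer the namespace-add RPC while the backgrounded
    # bdevperf (perfpid=3682160 above) is still running. NSID 1 already exists,
    # so the target logs "Requested NSID 1 already in use" and each RPC fails
    # with "Unable to add namespace", matching the pairs below.
    while kill -0 "$perfpid" 2> /dev/null; do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done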
00:09:57.688 [2024-11-07 13:15:05.674966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.688 [2024-11-07 13:15:05.674992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.688 [2024-11-07 13:15:05.688908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.688 [2024-11-07 13:15:05.688930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.702214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.702234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.715641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.715660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.729065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.729084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.742643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.742662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.756552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.756570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.770120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.770139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.783729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.783749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.797197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.797216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.810725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.810744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.824357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.824376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.838284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.838307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.852236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.852254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.865658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 
[2024-11-07 13:15:05.865676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.879234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.879253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.892767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.892785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-07 13:15:05.906090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-07 13:15:05.906109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.949 [2024-11-07 13:15:05.919795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.949 [2024-11-07 13:15:05.919814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.949 [2024-11-07 13:15:05.933078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.949 [2024-11-07 13:15:05.933097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.949 [2024-11-07 13:15:05.946437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.949 [2024-11-07 13:15:05.946456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-07 13:15:05.960051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:05.960071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:05.973098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:05.973116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:05.986792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:05.986811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.000749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.000768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.012030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.012049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.026705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.026724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.040282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.040301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.053474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.053492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.067001] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.067020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.080611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.080629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.093992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.094014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.107462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.107481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.121378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.121397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.135989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.136007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.150912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.150931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.165126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.165145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.178599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.178618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.192365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.192384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.209 [2024-11-07 13:15:06.206152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.209 [2024-11-07 13:15:06.206171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.219696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.219715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.233118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.233137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.246981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.247000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.260539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.260557] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.274084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.274103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.287405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.287423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.301053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.301071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.314352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.314370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.328049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.328067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.342035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.342054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.353173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.353192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.367680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.367698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.381484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.381504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.394517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.394536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.408102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.408122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.421367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.421386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.434944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.434963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.448415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.448434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.468 [2024-11-07 13:15:06.462250] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.468 [2024-11-07 13:15:06.462270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.476222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.476241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.487447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.487466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.501945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.501964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.513670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.513688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.528210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.528228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.539464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.539489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.553647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.553665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.567594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.567614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.579362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.579381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.593193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.593212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.606502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.606521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.620229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.620248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.631873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.631892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.646140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.646158] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.659660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.659678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 17394.00 IOPS, 135.89 MiB/s [2024-11-07T12:15:06.735Z] [2024-11-07 13:15:06.673479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.673498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.687124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.687143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.700837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.700856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.714257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.714275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.728 [2024-11-07 13:15:06.728030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.728 [2024-11-07 13:15:06.728050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.741643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.741662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.755400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.755418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.768732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.768751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.782365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.782384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.795955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.795973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.809286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.809305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.823084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.823103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.836596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.836615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 
13:15:06.850080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.850103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.863939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.863958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.877407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.877426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.890956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.890975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.904678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.904696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.918268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.918286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.931941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.931959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.945650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.945669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.958952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.958971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.972144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.972163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.994 [2024-11-07 13:15:06.985912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.994 [2024-11-07 13:15:06.985931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:06.999773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:06.999792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.013304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.013322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.026468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.026487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.039478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.039496] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.053331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.053350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.066528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.066547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.080750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.080768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.094092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.094110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.107537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.107562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.121076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.121095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.134904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.134923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.148234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.148252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.161702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.161721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.175261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.305 [2024-11-07 13:15:07.175280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.305 [2024-11-07 13:15:07.189149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.306 [2024-11-07 13:15:07.189168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.306 [2024-11-07 13:15:07.202948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.306 [2024-11-07 13:15:07.202966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.306 [2024-11-07 13:15:07.216590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.306 [2024-11-07 13:15:07.216608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.306 [2024-11-07 13:15:07.230329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.306 [2024-11-07 13:15:07.230347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.306 [2024-11-07 13:15:07.243623] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.306 [2024-11-07 13:15:07.243641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.306 [2024-11-07 13:15:07.257338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.306 [2024-11-07 13:15:07.257357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.306 [2024-11-07 13:15:07.270908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.306 [2024-11-07 13:15:07.270926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.306 [2024-11-07 13:15:07.284568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.306 [2024-11-07 13:15:07.284586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.640 [2024-11-07 13:15:07.297944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.640 [2024-11-07 13:15:07.297963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.640 [2024-11-07 13:15:07.311597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.640 [2024-11-07 13:15:07.311615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.640 [2024-11-07 13:15:07.324896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.640 [2024-11-07 13:15:07.324916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.640 [2024-11-07 13:15:07.338009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.640 [2024-11-07 13:15:07.338028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.640 [2024-11-07 13:15:07.351307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.640 [2024-11-07 13:15:07.351325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.364824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.364847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.378344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.378363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.391520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.391539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.405147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.405171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.418546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.418564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.431883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.431902] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.445562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.445581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.459201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.459219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.472767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.472785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.486329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.486347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.499766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.499784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.513423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.513441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.526703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.526721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.540399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.540417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.554659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.554677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.568913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.568931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.580216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.580235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.594763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.594782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.641 [2024-11-07 13:15:07.608345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.641 [2024-11-07 13:15:07.608363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.622066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.622088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.635471] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.635491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.649319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.649337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.662642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.662661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 17446.50 IOPS, 136.30 MiB/s [2024-11-07T12:15:07.909Z] [2024-11-07 13:15:07.676646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.676664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.689944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.689962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.703643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.703661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.717356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.717374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.730894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.730913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.744412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.744431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.757794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.757812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.771771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.771790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.782397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.782416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.796550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.796569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.810392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.902 [2024-11-07 13:15:07.810410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.902 [2024-11-07 13:15:07.824152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:59.902 [2024-11-07 13:15:07.824170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:59.902 [2024-11-07 13:15:07.837385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:59.902 [2024-11-07 13:15:07.837404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats at roughly 13 ms intervals from 13:15:07.851 through 13:15:10.570, while the zcopy test keeps retrying nvmf_subsystem_add_ns against an NSID that is still attached; only the bdevperf progress lines interleaved in that stretch are kept below ...]
00:10:00.687 17469.67 IOPS, 136.48 MiB/s [2024-11-07T12:15:08.694Z]
00:10:01.731 17468.75 IOPS, 136.47 MiB/s [2024-11-07T12:15:09.738Z]
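[Editor's note: the repeated pair above is this test's expected failure path: NSID 1 stays attached to nqn.2016-06.io.spdk:cnode1, so every further nvmf_subsystem_add_ns with the same NSID is rejected. A minimal sketch of a loop that reproduces the pair, assuming a running SPDK target, scripts/rpc.py from the SPDK tree, and a bdev malloc0 already exported as NSID 1 (the bdev name and iteration count here are illustrative):]

#!/usr/bin/env bash
# Sketch only: needs a running SPDK nvmf target whose subsystem
# nqn.2016-06.io.spdk:cnode1 already has NSID 1 attached.
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
for i in {1..5}; do
  # Each attempt is rejected on the target side ("Requested NSID 1
  # already in use") and rpc.py exits non-zero here.
  "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 \
    || echo "attempt $i: NSID 1 still in use"
done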
[... the subsystem.c:2123 / nvmf_rpc.c:1517 pair continues through 13:15:10.679 ...]
00:10:02.780 17456.00 IOPS, 136.38 MiB/s [2024-11-07T12:15:10.787Z]
00:10:02.780 Latency(us)
00:10:02.780 [2024-11-07T12:15:10.787Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:02.780 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:02.780 Nvme1n1            :       5.01   17455.39     136.37       0.00     0.00    7324.71    3386.03   16930.13
00:10:02.780 [2024-11-07T12:15:10.787Z] ===================================================================================================================
00:10:02.780 [2024-11-07T12:15:10.787Z] Total              :            17455.39     136.37       0.00     0.00    7324.71    3386.03   16930.13
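[Editor's note: a quick consistency check on the summary row: with the 8192-byte I/O size from the job line, throughput in MiB/s follows directly from IOPS. A one-liner using only numbers from the table above:]

# 17455.39 IOPS x 8192 bytes per I/O, converted to MiB/s (1 MiB = 1048576 bytes)
awk 'BEGIN { printf "%.2f\n", 17455.39 * 8192 / 1048576 }'   # prints 136.37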
[... the subsystem.c:2123 / nvmf_rpc.c:1517 pair resumes at 13:15:10.688 and repeats at roughly 12 ms intervals until its final occurrence at 13:15:11.278245, after which the script moves on to teardown ...]
00:10:03.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3682160) - No such process
00:10:03.303 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3682160
00:10:03.303 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:03.303 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.303 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:03.303 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.303 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:03.303 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.303 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:03.303 delay0
00:10:03.303 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.563 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:03.563 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.563 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:03.563 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.563 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:03.563 [2024-11-07 13:15:11.513065] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:11.701 Initializing NVMe Controllers
00:10:11.701 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:11.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:11.701 Initialization complete. Launching workers.
00:10:11.701 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 242, failed: 27457 00:10:11.701 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27576, failed to submit 123 00:10:11.701 success 27510, unsuccessful 66, failed 0 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.701 rmmod nvme_tcp 00:10:11.701 rmmod nvme_fabrics 00:10:11.701 rmmod nvme_keyring 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3679607 ']' 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3679607 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3679607 ']' 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3679607 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3679607 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3679607' 00:10:11.701 killing process with pid 3679607 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3679607 00:10:11.701 13:15:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3679607 00:10:11.701 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.701 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.701 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.701 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:11.701 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:11.701 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.701 13:15:19 
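For reference, the zcopy abort stage that produced the summary above reduces to roughly the following sequence. This is a sketch only: it assumes rpc.py addressing the default /var/tmp/spdk.sock, and the bdev_delay_create latency arguments (copied verbatim from the trace) are assumed to be in microseconds, i.e. about 1 s of injected latency per I/O, which is what keeps the 64-deep queue full of abortable commands:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # drop the malloc-backed namespace, then re-add it behind a delay bdev
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # 50/50 random r/w at qd=64 for 5 s on core 0; pending I/O gets aborted
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'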
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.701 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.701 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.701 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.701 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.701 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.616 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.616 00:10:13.616 real 0m37.508s 00:10:13.616 user 0m49.805s 00:10:13.616 sys 0m12.207s 00:10:13.616 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:13.616 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.616 ************************************ 00:10:13.616 END TEST nvmf_zcopy 00:10:13.616 ************************************ 00:10:13.616 13:15:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:13.616 13:15:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:13.616 13:15:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:13.616 13:15:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.616 ************************************ 00:10:13.616 START TEST nvmf_nmic 00:10:13.616 ************************************ 00:10:13.616 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:13.616 * Looking for test storage... 
00:10:13.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.616 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:13.616 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:10:13.616 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:13.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.878 --rc genhtml_branch_coverage=1 00:10:13.878 --rc genhtml_function_coverage=1 00:10:13.878 --rc genhtml_legend=1 00:10:13.878 --rc geninfo_all_blocks=1 00:10:13.878 --rc geninfo_unexecuted_blocks=1 00:10:13.878 00:10:13.878 ' 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:13.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.878 --rc genhtml_branch_coverage=1 00:10:13.878 --rc genhtml_function_coverage=1 00:10:13.878 --rc genhtml_legend=1 00:10:13.878 --rc geninfo_all_blocks=1 00:10:13.878 --rc geninfo_unexecuted_blocks=1 00:10:13.878 00:10:13.878 ' 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:13.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.878 --rc genhtml_branch_coverage=1 00:10:13.878 --rc genhtml_function_coverage=1 00:10:13.878 --rc genhtml_legend=1 00:10:13.878 --rc geninfo_all_blocks=1 00:10:13.878 --rc geninfo_unexecuted_blocks=1 00:10:13.878 00:10:13.878 ' 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:13.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.878 --rc genhtml_branch_coverage=1 00:10:13.878 --rc genhtml_function_coverage=1 00:10:13.878 --rc genhtml_legend=1 00:10:13.878 --rc geninfo_all_blocks=1 00:10:13.878 --rc geninfo_unexecuted_blocks=1 00:10:13.878 00:10:13.878 ' 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
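The lt/cmp_versions trace above is a field-wise numeric comparison of dot-separated version strings, used here to decide which lcov option set applies. A self-contained approximation (not the exact scripts/common.sh code; numeric fields only):

    lt() {  # usage: lt VER1 VER2 -> exit 0 iff VER1 < VER2
        local IFS=.-
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov"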
00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:13.878 
13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.878 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.879 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.879 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.879 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.879 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.879 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.879 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:13.879 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:13.879 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.879 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:22.024 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:22.024 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.024 13:15:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:22.024 Found net devices under 0000:31:00.0: cvl_0_0 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:22.024 Found net devices under 0000:31:00.1: cvl_0_1 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
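The discovery pass above walks each matching PCI function's sysfs node to find the kernel netdev bound to it. A standalone sketch of the same idea (the 8086:159b device ID is taken from this log; lspci availability is assumed):

    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] || continue      # no netdev bound (driver not loaded)
            echo "Found net device under $pci: ${path##*/}"
        done
    done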
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:22.024 13:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:22.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:10:22.285 00:10:22.285 --- 10.0.0.2 ping statistics --- 00:10:22.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.285 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:22.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:10:22.285 00:10:22.285 --- 10.0.0.1 ping statistics --- 00:10:22.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.285 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3690135 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.285 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3690135 00:10:22.286 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3690135 ']' 00:10:22.286 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.286 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:22.286 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.286 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:22.286 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.286 [2024-11-07 13:15:30.221786] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
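The common.sh trace above builds a point-to-point topology out of the two E810 ports by moving one into a private network namespace, then verifies it with a ping in each direction. Condensed from the commands as they appear in the trace (cvl_0_0/cvl_0_1 are this machine's interface names):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns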
00:10:22.286 [2024-11-07 13:15:30.221923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.546 [2024-11-07 13:15:30.388306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.546 [2024-11-07 13:15:30.490412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.546 [2024-11-07 13:15:30.490460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.546 [2024-11-07 13:15:30.490472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.546 [2024-11-07 13:15:30.490485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.546 [2024-11-07 13:15:30.490497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.546 [2024-11-07 13:15:30.492806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.546 [2024-11-07 13:15:30.492915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.546 [2024-11-07 13:15:30.493010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.546 [2024-11-07 13:15:30.493034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.117 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:23.117 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:10:23.117 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:23.117 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.117 13:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.117 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.117 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:23.117 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.117 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.117 [2024-11-07 13:15:31.038287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.117 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.117 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:23.117 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.117 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.117 Malloc0 00:10:23.117 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.117 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:23.117 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.117 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic 
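The target bring-up traced above launches nvmf_tgt inside that namespace and then configures it over JSON-RPC. A hedged sketch (flags copied from the log; the harness's waitforlisten polling of /var/tmp/spdk.sock is simplified here to a fixed sleep):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    sleep 2                                   # stand-in for waitforlisten
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME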
-- common/autotest_common.sh@10 -- # set +x 00:10:23.377 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.377 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.377 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.377 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.377 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.377 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.377 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.377 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.377 [2024-11-07 13:15:31.147773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.377 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.377 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:23.377 test case1: single bdev can't be used in multiple subsystems 00:10:23.377 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.378 [2024-11-07 13:15:31.183615] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:23.378 [2024-11-07 13:15:31.183653] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:23.378 [2024-11-07 13:15:31.183666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.378 request: 00:10:23.378 { 00:10:23.378 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:23.378 "namespace": { 00:10:23.378 "bdev_name": "Malloc0", 00:10:23.378 "no_auto_visible": false 
00:10:23.378 }, 00:10:23.378 "method": "nvmf_subsystem_add_ns", 00:10:23.378 "req_id": 1 00:10:23.378 } 00:10:23.378 Got JSON-RPC error response 00:10:23.378 response: 00:10:23.378 { 00:10:23.378 "code": -32602, 00:10:23.378 "message": "Invalid parameters" 00:10:23.378 } 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:23.378 Adding namespace failed - expected result. 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:23.378 test case2: host connect to nvmf target in multiple paths 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.378 [2024-11-07 13:15:31.195797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.378 13:15:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:24.762 13:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:26.700 13:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:26.700 13:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:10:26.700 13:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.700 13:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:26.700 13:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:10:28.614 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:28.614 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:28.614 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.614 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:28.614 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.614 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:10:28.614 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
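The two test cases above, in isolation. Case 1: Malloc0 is claimed exclusive_write by cnode1, so adding it to a second subsystem is expected to fail with the Invalid parameters response shown. Case 2: one subsystem with listeners on two ports, and the host connects over both paths. A sketch using the names and addresses from this log:

    # case 1: a second claim on the same bdev must fail
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # ok
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected
    # case 2: same subsystem, two listeners, two host connections
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421
    for port in 4420 4421; do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s $port \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
            --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
    done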
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:28.614 [global] 00:10:28.614 thread=1 00:10:28.614 invalidate=1 00:10:28.614 rw=write 00:10:28.614 time_based=1 00:10:28.614 runtime=1 00:10:28.614 ioengine=libaio 00:10:28.614 direct=1 00:10:28.614 bs=4096 00:10:28.614 iodepth=1 00:10:28.614 norandommap=0 00:10:28.614 numjobs=1 00:10:28.614 00:10:28.614 verify_dump=1 00:10:28.614 verify_backlog=512 00:10:28.614 verify_state_save=0 00:10:28.614 do_verify=1 00:10:28.614 verify=crc32c-intel 00:10:28.614 [job0] 00:10:28.614 filename=/dev/nvme0n1 00:10:28.614 Could not set queue depth (nvme0n1) 00:10:28.614 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.614 fio-3.35 00:10:28.614 Starting 1 thread 00:10:30.000 00:10:30.000 job0: (groupid=0, jobs=1): err= 0: pid=3691659: Thu Nov 7 13:15:37 2024 00:10:30.000 read: IOPS=623, BW=2494KiB/s (2553kB/s)(2496KiB/1001msec) 00:10:30.000 slat (nsec): min=6667, max=61650, avg=24308.36, stdev=6412.82 00:10:30.000 clat (usec): min=264, max=1164, avg=791.02, stdev=158.70 00:10:30.000 lat (usec): min=272, max=1189, avg=815.33, stdev=160.34 00:10:30.000 clat percentiles (usec): 00:10:30.000 | 1.00th=[ 424], 5.00th=[ 529], 10.00th=[ 594], 20.00th=[ 652], 00:10:30.000 | 30.00th=[ 709], 40.00th=[ 758], 50.00th=[ 791], 60.00th=[ 832], 00:10:30.000 | 70.00th=[ 865], 80.00th=[ 906], 90.00th=[ 1020], 95.00th=[ 1057], 00:10:30.000 | 99.00th=[ 1123], 99.50th=[ 1123], 99.90th=[ 1172], 99.95th=[ 1172], 00:10:30.000 | 99.99th=[ 1172] 00:10:30.000 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:30.000 slat (nsec): min=9331, max=69343, avg=27035.42, stdev=10916.66 00:10:30.000 clat (usec): min=107, max=1130, avg=441.77, stdev=134.76 00:10:30.000 lat (usec): min=119, max=1141, avg=468.81, stdev=136.97 00:10:30.000 clat percentiles (usec): 00:10:30.000 | 1.00th=[ 186], 5.00th=[ 223], 10.00th=[ 258], 20.00th=[ 318], 00:10:30.000 | 30.00th=[ 359], 40.00th=[ 408], 50.00th=[ 437], 60.00th=[ 478], 00:10:30.000 | 70.00th=[ 523], 80.00th=[ 570], 90.00th=[ 619], 95.00th=[ 668], 00:10:30.000 | 99.00th=[ 734], 99.50th=[ 758], 99.90th=[ 816], 99.95th=[ 1139], 00:10:30.000 | 99.99th=[ 1139] 00:10:30.000 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:30.000 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:30.000 lat (usec) : 250=5.40%, 500=36.89%, 750=34.04%, 1000=19.05% 00:10:30.000 lat (msec) : 2=4.61% 00:10:30.000 cpu : usr=2.10%, sys=4.50%, ctx=1648, majf=0, minf=1 00:10:30.000 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.000 issued rwts: total=624,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.000 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.000 00:10:30.000 Run status group 0 (all jobs): 00:10:30.000 READ: bw=2494KiB/s (2553kB/s), 2494KiB/s-2494KiB/s (2553kB/s-2553kB/s), io=2496KiB (2556kB), run=1001-1001msec 00:10:30.000 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:10:30.000 00:10:30.000 Disk stats (read/write): 00:10:30.000 nvme0n1: ios=578/1024, merge=0/0, ticks=439/423, in_queue=862, util=93.89% 00:10:30.001 13:15:37 
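The fio-wrapper invocation above expands to roughly the following plain fio command; every option is lifted from the generated job file printed before the run, so only the flattening to CLI form is assumed:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread=1 \
        --time_based=1 --runtime=1 --invalidate=1 --norandommap=0 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0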
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:30.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:30.260 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:30.260 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:10:30.260 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:30.260 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.260 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:30.260 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.261 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:10:30.261 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:30.261 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:30.261 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:30.261 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:30.261 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:30.261 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:30.261 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.261 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:30.521 rmmod nvme_tcp 00:10:30.521 rmmod nvme_fabrics 00:10:30.521 rmmod nvme_keyring 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3690135 ']' 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3690135 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3690135 ']' 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3690135 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3690135 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3690135' 00:10:30.521 killing process with pid 3690135 00:10:30.521 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3690135 00:10:30.521 13:15:38 
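The teardown above detaches the host and unloads the initiator stack before killing the target; note the disconnect reports "2 controller(s)" because both multipath connections from case 2 are torn down at once. A sketch (module list matches the rmmod output in the log; nvmfpid is the nvmf_tgt PID captured at startup, 3690135 in this run):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring   # initiator modules
    kill "$nvmfpid" && wait "$nvmfpid"                  # stop the target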
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3690135 00:10:31.465 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:31.465 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:31.465 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:31.465 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:31.465 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:31.465 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:31.465 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:31.465 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.465 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.465 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.465 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.465 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.381 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.381 00:10:33.381 real 0m19.857s 00:10:33.381 user 0m52.001s 00:10:33.381 sys 0m7.516s 00:10:33.381 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:33.381 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:33.381 ************************************ 00:10:33.381 END TEST nvmf_nmic 00:10:33.381 ************************************ 00:10:33.381 13:15:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:33.381 13:15:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:33.381 13:15:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:33.381 13:15:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.381 ************************************ 00:10:33.381 START TEST nvmf_fio_target 00:10:33.381 ************************************ 00:10:33.381 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:33.642 * Looking for test storage... 
00:10:33.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.642 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:33.642 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:33.642 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:33.642 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:33.642 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.642 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.642 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.642 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:33.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.643 --rc genhtml_branch_coverage=1 00:10:33.643 --rc genhtml_function_coverage=1 00:10:33.643 --rc genhtml_legend=1 00:10:33.643 --rc geninfo_all_blocks=1 00:10:33.643 --rc geninfo_unexecuted_blocks=1 00:10:33.643 00:10:33.643 ' 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:33.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.643 --rc genhtml_branch_coverage=1 00:10:33.643 --rc genhtml_function_coverage=1 00:10:33.643 --rc genhtml_legend=1 00:10:33.643 --rc geninfo_all_blocks=1 00:10:33.643 --rc geninfo_unexecuted_blocks=1 00:10:33.643 00:10:33.643 ' 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:33.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.643 --rc genhtml_branch_coverage=1 00:10:33.643 --rc genhtml_function_coverage=1 00:10:33.643 --rc genhtml_legend=1 00:10:33.643 --rc geninfo_all_blocks=1 00:10:33.643 --rc geninfo_unexecuted_blocks=1 00:10:33.643 00:10:33.643 ' 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:33.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.643 --rc genhtml_branch_coverage=1 00:10:33.643 --rc genhtml_function_coverage=1 00:10:33.643 --rc genhtml_legend=1 00:10:33.643 --rc geninfo_all_blocks=1 00:10:33.643 --rc geninfo_unexecuted_blocks=1 00:10:33.643 00:10:33.643 ' 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:33.643 13:15:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.643 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.644 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.644 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:33.644 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:33.644 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.644 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.805 13:15:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:41.805 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:41.805 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.805 13:15:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:41.805 Found net devices under 0000:31:00.0: cvl_0_0 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:41.805 Found net devices under 0000:31:00.1: cvl_0_1 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.805 13:15:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.805 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.806 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:41.806 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.806 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.806 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.806 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:41.806 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:41.806 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.806 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:42.069 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:42.069 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:42.069 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:42.069 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:42.069 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:42.069 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:42.069 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:42.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:42.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:10:42.069 00:10:42.069 --- 10.0.0.2 ping statistics --- 00:10:42.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.069 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:42.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:42.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:10:42.069 00:10:42.069 --- 10.0.0.1 ping statistics --- 00:10:42.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.069 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:42.069 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.330 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3696940 00:10:42.330 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3696940 00:10:42.330 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:42.330 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3696940 ']' 00:10:42.330 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.330 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:42.330 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.330 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:42.330 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.330 [2024-11-07 13:15:50.180604] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:10:42.330 [2024-11-07 13:15:50.180746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.592 [2024-11-07 13:15:50.343914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.592 [2024-11-07 13:15:50.444867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.592 [2024-11-07 13:15:50.444912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.592 [2024-11-07 13:15:50.444924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.592 [2024-11-07 13:15:50.444935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.592 [2024-11-07 13:15:50.444944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.592 [2024-11-07 13:15:50.447204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.592 [2024-11-07 13:15:50.447308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.592 [2024-11-07 13:15:50.447444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.592 [2024-11-07 13:15:50.447468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.164 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:43.164 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:10:43.164 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:43.164 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:43.164 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.164 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.164 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:43.164 [2024-11-07 13:15:51.148449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.426 13:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.426 13:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:43.426 13:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.686 13:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:43.686 13:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.947 13:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:43.947 13:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.207 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:44.207 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:44.468 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.729 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:44.729 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.989 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:44.989 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.249 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:45.249 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:45.249 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:45.509 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:45.509 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.769 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:45.769 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.769 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.029 [2024-11-07 13:15:53.868306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.029 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:46.289 13:15:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:46.289 13:15:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.199 13:15:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:48.199 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:10:48.199 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.199 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:10:48.199 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:10:48.199 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:10:50.133 13:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:50.133 13:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:50.133 13:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.133 13:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:50.133 13:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.133 13:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:50.133 13:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:50.133 [global] 00:10:50.133 thread=1 00:10:50.133 invalidate=1 00:10:50.133 rw=write 00:10:50.133 time_based=1 00:10:50.133 runtime=1 00:10:50.133 ioengine=libaio 00:10:50.133 direct=1 00:10:50.133 bs=4096 00:10:50.133 iodepth=1 00:10:50.133 norandommap=0 00:10:50.133 numjobs=1 00:10:50.133 00:10:50.133 verify_dump=1 00:10:50.133 verify_backlog=512 00:10:50.133 verify_state_save=0 00:10:50.133 do_verify=1 00:10:50.133 verify=crc32c-intel 00:10:50.133 [job0] 00:10:50.133 filename=/dev/nvme0n1 00:10:50.133 [job1] 00:10:50.133 filename=/dev/nvme0n2 00:10:50.133 [job2] 00:10:50.133 filename=/dev/nvme0n3 00:10:50.133 [job3] 00:10:50.133 filename=/dev/nvme0n4 00:10:50.133 Could not set queue depth (nvme0n1) 00:10:50.133 Could not set queue depth (nvme0n2) 00:10:50.133 Could not set queue depth (nvme0n3) 00:10:50.133 Could not set queue depth (nvme0n4) 00:10:50.396 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.396 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.396 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.396 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.396 fio-3.35 00:10:50.396 Starting 4 threads 00:10:51.800 00:10:51.800 job0: (groupid=0, jobs=1): err= 0: pid=3698632: Thu Nov 7 13:15:59 2024 00:10:51.800 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:51.800 slat (nsec): min=6646, max=50382, avg=25873.03, stdev=4041.59 00:10:51.800 clat (usec): min=445, max=1290, avg=918.95, stdev=141.35 00:10:51.800 lat (usec): min=473, max=1320, avg=944.82, stdev=142.07 00:10:51.800 clat percentiles (usec): 00:10:51.800 | 1.00th=[ 553], 5.00th=[ 652], 10.00th=[ 709], 20.00th=[ 791], 
00:10:51.800 | 30.00th=[ 865], 40.00th=[ 922], 50.00th=[ 955], 60.00th=[ 996], 00:10:51.800 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1090], 00:10:51.800 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1287], 99.95th=[ 1287], 00:10:51.800 | 99.99th=[ 1287] 00:10:51.800 write: IOPS=844, BW=3377KiB/s (3458kB/s)(3380KiB/1001msec); 0 zone resets 00:10:51.800 slat (nsec): min=9437, max=67697, avg=32255.24, stdev=9629.86 00:10:51.800 clat (usec): min=273, max=1025, avg=566.61, stdev=117.53 00:10:51.800 lat (usec): min=283, max=1062, avg=598.86, stdev=120.09 00:10:51.800 clat percentiles (usec): 00:10:51.800 | 1.00th=[ 322], 5.00th=[ 367], 10.00th=[ 400], 20.00th=[ 465], 00:10:51.800 | 30.00th=[ 502], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 603], 00:10:51.800 | 70.00th=[ 627], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 758], 00:10:51.800 | 99.00th=[ 807], 99.50th=[ 840], 99.90th=[ 1029], 99.95th=[ 1029], 00:10:51.800 | 99.99th=[ 1029] 00:10:51.800 bw ( KiB/s): min= 4096, max= 4096, per=42.03%, avg=4096.00, stdev= 0.00, samples=1 00:10:51.800 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:51.800 lat (usec) : 500=18.94%, 750=45.84%, 1000=21.59% 00:10:51.800 lat (msec) : 2=13.63% 00:10:51.800 cpu : usr=2.40%, sys=4.60%, ctx=1360, majf=0, minf=1 00:10:51.800 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.800 issued rwts: total=512,845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.800 job1: (groupid=0, jobs=1): err= 0: pid=3698650: Thu Nov 7 13:15:59 2024 00:10:51.800 read: IOPS=16, BW=67.1KiB/s (68.7kB/s)(68.0KiB/1013msec) 00:10:51.800 slat (nsec): min=25776, max=26318, avg=25949.06, stdev=166.76 00:10:51.800 clat (usec): min=1148, max=43012, avg=39638.28, stdev=9922.15 00:10:51.800 lat (usec): min=1174, max=43038, avg=39664.23, stdev=9922.08 00:10:51.800 clat percentiles (usec): 00:10:51.800 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[41681], 20.00th=[41681], 00:10:51.800 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:51.800 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:10:51.800 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:51.800 | 99.99th=[43254] 00:10:51.800 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:10:51.800 slat (nsec): min=10082, max=52389, avg=30282.38, stdev=9627.07 00:10:51.800 clat (usec): min=273, max=1023, avg=624.61, stdev=119.42 00:10:51.800 lat (usec): min=285, max=1057, avg=654.89, stdev=123.42 00:10:51.800 clat percentiles (usec): 00:10:51.800 | 1.00th=[ 351], 5.00th=[ 396], 10.00th=[ 465], 20.00th=[ 529], 00:10:51.800 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:10:51.800 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 799], 00:10:51.800 | 99.00th=[ 914], 99.50th=[ 955], 99.90th=[ 1020], 99.95th=[ 1020], 00:10:51.800 | 99.99th=[ 1020] 00:10:51.800 bw ( KiB/s): min= 4096, max= 4096, per=42.03%, avg=4096.00, stdev= 0.00, samples=1 00:10:51.800 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:51.800 lat (usec) : 500=14.74%, 750=69.57%, 1000=12.29% 00:10:51.800 lat (msec) : 2=0.38%, 50=3.02% 00:10:51.800 cpu : usr=0.99%, sys=1.19%, ctx=530, majf=0, minf=1 00:10:51.800 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.800 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.800 job2: (groupid=0, jobs=1): err= 0: pid=3698673: Thu Nov 7 13:15:59 2024 00:10:51.800 read: IOPS=23, BW=92.7KiB/s (94.9kB/s)(96.0KiB/1036msec) 00:10:51.800 slat (nsec): min=26870, max=62264, avg=29331.00, stdev=7053.93 00:10:51.800 clat (usec): min=605, max=42601, avg=29448.86, stdev=18793.12 00:10:51.800 lat (usec): min=668, max=42629, avg=29478.19, stdev=18790.43 00:10:51.800 clat percentiles (usec): 00:10:51.800 | 1.00th=[ 603], 5.00th=[ 807], 10.00th=[ 840], 20.00th=[ 898], 00:10:51.800 | 30.00th=[34866], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:51.800 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:51.800 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:51.800 | 99.99th=[42730] 00:10:51.800 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:10:51.800 slat (nsec): min=10015, max=57595, avg=34203.82, stdev=8596.29 00:10:51.800 clat (usec): min=207, max=985, avg=600.29, stdev=124.90 00:10:51.800 lat (usec): min=243, max=1022, avg=634.49, stdev=128.01 00:10:51.800 clat percentiles (usec): 00:10:51.800 | 1.00th=[ 258], 5.00th=[ 396], 10.00th=[ 437], 20.00th=[ 502], 00:10:51.800 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:10:51.800 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 783], 00:10:51.800 | 99.00th=[ 898], 99.50th=[ 947], 99.90th=[ 988], 99.95th=[ 988], 00:10:51.800 | 99.99th=[ 988] 00:10:51.800 bw ( KiB/s): min= 4096, max= 4096, per=42.03%, avg=4096.00, stdev= 0.00, samples=1 00:10:51.800 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:51.800 lat (usec) : 250=0.75%, 500=17.91%, 750=67.91%, 1000=10.07% 00:10:51.800 lat (msec) : 2=0.19%, 50=3.17% 00:10:51.800 cpu : usr=0.68%, sys=2.42%, ctx=537, majf=0, minf=1 00:10:51.800 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.800 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.800 job3: (groupid=0, jobs=1): err= 0: pid=3698681: Thu Nov 7 13:15:59 2024 00:10:51.800 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:51.800 slat (nsec): min=7716, max=56064, avg=27759.34, stdev=2828.49 00:10:51.800 clat (usec): min=766, max=1223, avg=1054.23, stdev=63.42 00:10:51.800 lat (usec): min=793, max=1250, avg=1081.99, stdev=63.43 00:10:51.800 clat percentiles (usec): 00:10:51.800 | 1.00th=[ 865], 5.00th=[ 938], 10.00th=[ 979], 20.00th=[ 1012], 00:10:51.800 | 30.00th=[ 1029], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1074], 00:10:51.800 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1123], 95.00th=[ 1156], 00:10:51.801 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1221], 99.95th=[ 1221], 00:10:51.801 | 99.99th=[ 1221] 00:10:51.801 write: IOPS=654, BW=2617KiB/s (2680kB/s)(2620KiB/1001msec); 0 zone resets 00:10:51.801 slat (nsec): min=9537, max=56968, avg=31774.96, stdev=10088.97 00:10:51.801 clat (usec): min=238, 
max=1031, avg=635.02, stdev=129.27 00:10:51.801 lat (usec): min=250, max=1067, avg=666.79, stdev=134.03 00:10:51.801 clat percentiles (usec): 00:10:51.801 | 1.00th=[ 334], 5.00th=[ 416], 10.00th=[ 465], 20.00th=[ 529], 00:10:51.801 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 668], 00:10:51.801 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 840], 00:10:51.801 | 99.00th=[ 979], 99.50th=[ 1004], 99.90th=[ 1029], 99.95th=[ 1029], 00:10:51.801 | 99.99th=[ 1029] 00:10:51.801 bw ( KiB/s): min= 4096, max= 4096, per=42.03%, avg=4096.00, stdev= 0.00, samples=1 00:10:51.801 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:51.801 lat (usec) : 250=0.09%, 500=8.57%, 750=38.22%, 1000=15.85% 00:10:51.801 lat (msec) : 2=37.28% 00:10:51.801 cpu : usr=3.80%, sys=3.30%, ctx=1168, majf=0, minf=1 00:10:51.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.801 issued rwts: total=512,655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.801 00:10:51.801 Run status group 0 (all jobs): 00:10:51.801 READ: bw=4112KiB/s (4211kB/s), 67.1KiB/s-2046KiB/s (68.7kB/s-2095kB/s), io=4260KiB (4362kB), run=1001-1036msec 00:10:51.801 WRITE: bw=9745KiB/s (9979kB/s), 1977KiB/s-3377KiB/s (2024kB/s-3458kB/s), io=9.86MiB (10.3MB), run=1001-1036msec 00:10:51.801 00:10:51.801 Disk stats (read/write): 00:10:51.801 nvme0n1: ios=535/540, merge=0/0, ticks=1295/305, in_queue=1600, util=84.07% 00:10:51.801 nvme0n2: ios=61/512, merge=0/0, ticks=738/308, in_queue=1046, util=87.84% 00:10:51.801 nvme0n3: ios=41/512, merge=0/0, ticks=1376/237, in_queue=1613, util=91.96% 00:10:51.801 nvme0n4: ios=465/512, merge=0/0, ticks=1306/265, in_queue=1571, util=94.21% 00:10:51.801 13:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:51.801 [global] 00:10:51.801 thread=1 00:10:51.801 invalidate=1 00:10:51.801 rw=randwrite 00:10:51.801 time_based=1 00:10:51.801 runtime=1 00:10:51.801 ioengine=libaio 00:10:51.801 direct=1 00:10:51.801 bs=4096 00:10:51.801 iodepth=1 00:10:51.801 norandommap=0 00:10:51.801 numjobs=1 00:10:51.801 00:10:51.801 verify_dump=1 00:10:51.801 verify_backlog=512 00:10:51.801 verify_state_save=0 00:10:51.801 do_verify=1 00:10:51.801 verify=crc32c-intel 00:10:51.801 [job0] 00:10:51.801 filename=/dev/nvme0n1 00:10:51.801 [job1] 00:10:51.801 filename=/dev/nvme0n2 00:10:51.801 [job2] 00:10:51.801 filename=/dev/nvme0n3 00:10:51.801 [job3] 00:10:51.801 filename=/dev/nvme0n4 00:10:51.801 Could not set queue depth (nvme0n1) 00:10:51.801 Could not set queue depth (nvme0n2) 00:10:51.801 Could not set queue depth (nvme0n3) 00:10:51.801 Could not set queue depth (nvme0n4) 00:10:52.067 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.067 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.067 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.067 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.067 fio-3.35 00:10:52.067 Starting 4 
threads 00:10:53.483 00:10:53.484 job0: (groupid=0, jobs=1): err= 0: pid=3699183: Thu Nov 7 13:16:01 2024 00:10:53.484 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:53.484 slat (nsec): min=26935, max=60219, avg=27884.47, stdev=2915.53 00:10:53.484 clat (usec): min=581, max=1201, avg=1004.30, stdev=78.45 00:10:53.484 lat (usec): min=608, max=1228, avg=1032.18, stdev=78.28 00:10:53.484 clat percentiles (usec): 00:10:53.484 | 1.00th=[ 775], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 955], 00:10:53.484 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:10:53.484 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:10:53.484 | 99.00th=[ 1172], 99.50th=[ 1172], 99.90th=[ 1205], 99.95th=[ 1205], 00:10:53.484 | 99.99th=[ 1205] 00:10:53.484 write: IOPS=704, BW=2817KiB/s (2885kB/s)(2820KiB/1001msec); 0 zone resets 00:10:53.484 slat (nsec): min=9319, max=67848, avg=30198.15, stdev=9961.83 00:10:53.484 clat (usec): min=283, max=1014, avg=624.07, stdev=123.18 00:10:53.484 lat (usec): min=295, max=1048, avg=654.27, stdev=127.60 00:10:53.484 clat percentiles (usec): 00:10:53.484 | 1.00th=[ 347], 5.00th=[ 412], 10.00th=[ 465], 20.00th=[ 515], 00:10:53.484 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:10:53.484 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 807], 00:10:53.484 | 99.00th=[ 955], 99.50th=[ 996], 99.90th=[ 1012], 99.95th=[ 1012], 00:10:53.484 | 99.99th=[ 1012] 00:10:53.484 bw ( KiB/s): min= 4096, max= 4096, per=38.10%, avg=4096.00, stdev= 0.00, samples=1 00:10:53.484 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:53.484 lat (usec) : 500=10.11%, 750=40.02%, 1000=25.88% 00:10:53.484 lat (msec) : 2=23.99% 00:10:53.484 cpu : usr=2.60%, sys=4.70%, ctx=1219, majf=0, minf=1 00:10:53.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.484 issued rwts: total=512,705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.484 job1: (groupid=0, jobs=1): err= 0: pid=3699193: Thu Nov 7 13:16:01 2024 00:10:53.484 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:53.484 slat (nsec): min=8174, max=45006, avg=26666.42, stdev=2771.50 00:10:53.484 clat (usec): min=687, max=1513, avg=1127.11, stdev=149.60 00:10:53.484 lat (usec): min=713, max=1539, avg=1153.78, stdev=149.66 00:10:53.484 clat percentiles (usec): 00:10:53.484 | 1.00th=[ 832], 5.00th=[ 922], 10.00th=[ 963], 20.00th=[ 1004], 00:10:53.484 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1156], 00:10:53.484 | 70.00th=[ 1237], 80.00th=[ 1287], 90.00th=[ 1336], 95.00th=[ 1369], 00:10:53.484 | 99.00th=[ 1434], 99.50th=[ 1450], 99.90th=[ 1516], 99.95th=[ 1516], 00:10:53.484 | 99.99th=[ 1516] 00:10:53.484 write: IOPS=602, BW=2410KiB/s (2467kB/s)(2412KiB/1001msec); 0 zone resets 00:10:53.484 slat (nsec): min=9640, max=56530, avg=30958.41, stdev=8613.45 00:10:53.484 clat (usec): min=216, max=1003, avg=631.96, stdev=128.73 00:10:53.484 lat (usec): min=227, max=1038, avg=662.92, stdev=131.62 00:10:53.484 clat percentiles (usec): 00:10:53.484 | 1.00th=[ 314], 5.00th=[ 400], 10.00th=[ 457], 20.00th=[ 529], 00:10:53.484 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 668], 00:10:53.484 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 840], 
00:10:53.484 | 99.00th=[ 922], 99.50th=[ 988], 99.90th=[ 1004], 99.95th=[ 1004], 00:10:53.484 | 99.99th=[ 1004] 00:10:53.484 bw ( KiB/s): min= 4096, max= 4096, per=38.10%, avg=4096.00, stdev= 0.00, samples=1 00:10:53.484 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:53.484 lat (usec) : 250=0.09%, 500=7.44%, 750=37.58%, 1000=17.58% 00:10:53.484 lat (msec) : 2=37.31% 00:10:53.484 cpu : usr=1.80%, sys=3.20%, ctx=1116, majf=0, minf=1 00:10:53.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.484 issued rwts: total=512,603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.484 job2: (groupid=0, jobs=1): err= 0: pid=3699212: Thu Nov 7 13:16:01 2024 00:10:53.484 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:53.484 slat (nsec): min=26552, max=61975, avg=27623.41, stdev=3060.38 00:10:53.484 clat (usec): min=617, max=1212, avg=1028.33, stdev=83.52 00:10:53.484 lat (usec): min=644, max=1239, avg=1055.96, stdev=83.46 00:10:53.484 clat percentiles (usec): 00:10:53.484 | 1.00th=[ 758], 5.00th=[ 865], 10.00th=[ 922], 20.00th=[ 979], 00:10:53.484 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1037], 60.00th=[ 1057], 00:10:53.484 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1139], 00:10:53.484 | 99.00th=[ 1172], 99.50th=[ 1172], 99.90th=[ 1221], 99.95th=[ 1221], 00:10:53.484 | 99.99th=[ 1221] 00:10:53.484 write: IOPS=688, BW=2753KiB/s (2819kB/s)(2756KiB/1001msec); 0 zone resets 00:10:53.484 slat (nsec): min=3852, max=54880, avg=29270.18, stdev=9945.47 00:10:53.484 clat (usec): min=182, max=980, avg=623.94, stdev=129.22 00:10:53.484 lat (usec): min=194, max=1014, avg=653.21, stdev=132.96 00:10:53.484 clat percentiles (usec): 00:10:53.484 | 1.00th=[ 251], 5.00th=[ 392], 10.00th=[ 453], 20.00th=[ 523], 00:10:53.484 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 668], 00:10:53.484 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 816], 00:10:53.484 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 979], 99.95th=[ 979], 00:10:53.484 | 99.99th=[ 979] 00:10:53.484 bw ( KiB/s): min= 4087, max= 4087, per=38.02%, avg=4087.00, stdev= 0.00, samples=1 00:10:53.484 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:53.484 lat (usec) : 250=0.50%, 500=8.83%, 750=39.38%, 1000=20.15% 00:10:53.484 lat (msec) : 2=31.14% 00:10:53.484 cpu : usr=1.70%, sys=3.60%, ctx=1205, majf=0, minf=1 00:10:53.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.484 issued rwts: total=512,689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.484 job3: (groupid=0, jobs=1): err= 0: pid=3699219: Thu Nov 7 13:16:01 2024 00:10:53.484 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:53.484 slat (nsec): min=8290, max=61674, avg=28468.31, stdev=2829.44 00:10:53.484 clat (usec): min=519, max=1356, avg=1019.79, stdev=98.20 00:10:53.484 lat (usec): min=547, max=1384, avg=1048.26, stdev=98.10 00:10:53.484 clat percentiles (usec): 00:10:53.484 | 1.00th=[ 766], 5.00th=[ 848], 10.00th=[ 898], 20.00th=[ 955], 
00:10:53.484 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1045], 00:10:53.484 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:10:53.484 | 99.00th=[ 1254], 99.50th=[ 1303], 99.90th=[ 1352], 99.95th=[ 1352], 00:10:53.484 | 99.99th=[ 1352] 00:10:53.484 write: IOPS=692, BW=2769KiB/s (2836kB/s)(2772KiB/1001msec); 0 zone resets 00:10:53.484 slat (nsec): min=9269, max=67606, avg=32482.14, stdev=8731.85 00:10:53.484 clat (usec): min=216, max=2936, avg=622.08, stdev=162.02 00:10:53.484 lat (usec): min=231, max=2975, avg=654.56, stdev=164.38 00:10:53.484 clat percentiles (usec): 00:10:53.484 | 1.00th=[ 322], 5.00th=[ 383], 10.00th=[ 441], 20.00th=[ 506], 00:10:53.484 | 30.00th=[ 553], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:10:53.484 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 832], 00:10:53.484 | 99.00th=[ 971], 99.50th=[ 1020], 99.90th=[ 2933], 99.95th=[ 2933], 00:10:53.484 | 99.99th=[ 2933] 00:10:53.484 bw ( KiB/s): min= 4096, max= 4096, per=38.10%, avg=4096.00, stdev= 0.00, samples=1 00:10:53.484 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:53.484 lat (usec) : 250=0.17%, 500=10.54%, 750=37.93%, 1000=25.64% 00:10:53.484 lat (msec) : 2=25.64%, 4=0.08% 00:10:53.484 cpu : usr=2.60%, sys=4.90%, ctx=1206, majf=0, minf=1 00:10:53.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.484 issued rwts: total=512,693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.484 00:10:53.484 Run status group 0 (all jobs): 00:10:53.484 READ: bw=8184KiB/s (8380kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:53.484 WRITE: bw=10.5MiB/s (11.0MB/s), 2410KiB/s-2817KiB/s (2467kB/s-2885kB/s), io=10.5MiB (11.0MB), run=1001-1001msec 00:10:53.484 00:10:53.484 Disk stats (read/write): 00:10:53.484 nvme0n1: ios=527/512, merge=0/0, ticks=503/262, in_queue=765, util=87.17% 00:10:53.485 nvme0n2: ios=473/512, merge=0/0, ticks=556/306, in_queue=862, util=91.13% 00:10:53.485 nvme0n3: ios=517/512, merge=0/0, ticks=576/309, in_queue=885, util=95.35% 00:10:53.485 nvme0n4: ios=512/512, merge=0/0, ticks=537/254, in_queue=791, util=97.12% 00:10:53.485 13:16:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:53.485 [global] 00:10:53.485 thread=1 00:10:53.485 invalidate=1 00:10:53.485 rw=write 00:10:53.485 time_based=1 00:10:53.485 runtime=1 00:10:53.485 ioengine=libaio 00:10:53.485 direct=1 00:10:53.485 bs=4096 00:10:53.485 iodepth=128 00:10:53.485 norandommap=0 00:10:53.485 numjobs=1 00:10:53.485 00:10:53.485 verify_dump=1 00:10:53.485 verify_backlog=512 00:10:53.485 verify_state_save=0 00:10:53.485 do_verify=1 00:10:53.485 verify=crc32c-intel 00:10:53.485 [job0] 00:10:53.485 filename=/dev/nvme0n1 00:10:53.485 [job1] 00:10:53.485 filename=/dev/nvme0n2 00:10:53.485 [job2] 00:10:53.485 filename=/dev/nvme0n3 00:10:53.485 [job3] 00:10:53.485 filename=/dev/nvme0n4 00:10:53.485 Could not set queue depth (nvme0n1) 00:10:53.485 Could not set queue depth (nvme0n2) 00:10:53.485 Could not set queue depth (nvme0n3) 00:10:53.485 Could not set queue depth (nvme0n4) 00:10:53.748 job0: (g=0): rw=write, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.748 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.748 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.748 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.748 fio-3.35 00:10:53.748 Starting 4 threads 00:10:55.140 00:10:55.140 job0: (groupid=0, jobs=1): err= 0: pid=3699621: Thu Nov 7 13:16:02 2024 00:10:55.140 read: IOPS=3863, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1004msec) 00:10:55.140 slat (nsec): min=928, max=25570k, avg=138114.11, stdev=1019176.77 00:10:55.140 clat (usec): min=3257, max=70782, avg=18145.26, stdev=17180.28 00:10:55.140 lat (usec): min=4124, max=70805, avg=18283.37, stdev=17313.66 00:10:55.140 clat percentiles (usec): 00:10:55.140 | 1.00th=[ 4359], 5.00th=[ 5997], 10.00th=[ 6259], 20.00th=[ 7111], 00:10:55.140 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7963], 60.00th=[11207], 00:10:55.140 | 70.00th=[19268], 80.00th=[30802], 90.00th=[49546], 95.00th=[58459], 00:10:55.140 | 99.00th=[61080], 99.50th=[62653], 99.90th=[69731], 99.95th=[70779], 00:10:55.140 | 99.99th=[70779] 00:10:55.140 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:10:55.140 slat (nsec): min=1599, max=14024k, avg=99432.59, stdev=709934.50 00:10:55.140 clat (usec): min=1218, max=52962, avg=13912.60, stdev=8253.95 00:10:55.140 lat (usec): min=1229, max=52969, avg=14012.03, stdev=8306.36 00:10:55.140 clat percentiles (usec): 00:10:55.140 | 1.00th=[ 1909], 5.00th=[ 4113], 10.00th=[ 6259], 20.00th=[ 7111], 00:10:55.140 | 30.00th=[ 7701], 40.00th=[ 8717], 50.00th=[12387], 60.00th=[13566], 00:10:55.140 | 70.00th=[16188], 80.00th=[22414], 90.00th=[25822], 95.00th=[27919], 00:10:55.140 | 99.00th=[38536], 99.50th=[40109], 99.90th=[52167], 99.95th=[52167], 00:10:55.140 | 99.99th=[53216] 00:10:55.140 bw ( KiB/s): min=14992, max=17776, per=21.17%, avg=16384.00, stdev=1968.59, samples=2 00:10:55.140 iops : min= 3748, max= 4444, avg=4096.00, stdev=492.15, samples=2 00:10:55.140 lat (msec) : 2=0.65%, 4=1.81%, 10=47.29%, 20=24.74%, 50=20.53% 00:10:55.140 lat (msec) : 100=4.99% 00:10:55.140 cpu : usr=2.79%, sys=5.18%, ctx=350, majf=0, minf=1 00:10:55.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:55.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.140 issued rwts: total=3879,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.140 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.140 job1: (groupid=0, jobs=1): err= 0: pid=3699627: Thu Nov 7 13:16:02 2024 00:10:55.140 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:10:55.140 slat (nsec): min=1057, max=59445k, avg=115054.16, stdev=1261290.49 00:10:55.140 clat (usec): min=2466, max=83645, avg=15002.88, stdev=11812.77 00:10:55.140 lat (usec): min=2476, max=83675, avg=15117.94, stdev=11911.78 00:10:55.140 clat percentiles (usec): 00:10:55.140 | 1.00th=[ 4752], 5.00th=[ 5342], 10.00th=[ 5669], 20.00th=[ 7177], 00:10:55.140 | 30.00th=[ 7898], 40.00th=[10552], 50.00th=[12780], 60.00th=[13566], 00:10:55.140 | 70.00th=[14877], 80.00th=[18744], 90.00th=[26084], 95.00th=[38536], 00:10:55.140 | 99.00th=[79168], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:10:55.140 | 99.99th=[83362] 00:10:55.140 
write: IOPS=4211, BW=16.5MiB/s (17.2MB/s)(16.6MiB/1012msec); 0 zone resets 00:10:55.140 slat (nsec): min=1656, max=28983k, avg=106963.54, stdev=830017.55 00:10:55.140 clat (usec): min=1395, max=96897, avg=16664.09, stdev=15462.93 00:10:55.140 lat (usec): min=1399, max=96904, avg=16771.05, stdev=15533.75 00:10:55.140 clat percentiles (usec): 00:10:55.140 | 1.00th=[ 2540], 5.00th=[ 4555], 10.00th=[ 5145], 20.00th=[ 6849], 00:10:55.140 | 30.00th=[ 8848], 40.00th=[10159], 50.00th=[11338], 60.00th=[12911], 00:10:55.140 | 70.00th=[17433], 80.00th=[22414], 90.00th=[35390], 95.00th=[45876], 00:10:55.140 | 99.00th=[90702], 99.50th=[93848], 99.90th=[96994], 99.95th=[96994], 00:10:55.140 | 99.99th=[96994] 00:10:55.140 bw ( KiB/s): min=16384, max=16696, per=21.37%, avg=16540.00, stdev=220.62, samples=2 00:10:55.140 iops : min= 4096, max= 4174, avg=4135.00, stdev=55.15, samples=2 00:10:55.140 lat (msec) : 2=0.45%, 4=1.25%, 10=36.80%, 20=40.11%, 50=18.71% 00:10:55.140 lat (msec) : 100=2.69% 00:10:55.140 cpu : usr=3.56%, sys=5.04%, ctx=318, majf=0, minf=1 00:10:55.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:55.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.140 issued rwts: total=3584,4262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.140 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.140 job2: (groupid=0, jobs=1): err= 0: pid=3699643: Thu Nov 7 13:16:02 2024 00:10:55.140 read: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec) 00:10:55.140 slat (nsec): min=1060, max=11050k, avg=86264.92, stdev=609205.65 00:10:55.140 clat (usec): min=4339, max=27992, avg=10868.04, stdev=3267.62 00:10:55.140 lat (usec): min=4346, max=28000, avg=10954.30, stdev=3313.93 00:10:55.140 clat percentiles (usec): 00:10:55.140 | 1.00th=[ 6063], 5.00th=[ 7504], 10.00th=[ 7898], 20.00th=[ 8356], 00:10:55.140 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[10814], 00:10:55.140 | 70.00th=[11731], 80.00th=[12911], 90.00th=[15401], 95.00th=[17957], 00:10:55.140 | 99.00th=[21627], 99.50th=[22676], 99.90th=[25822], 99.95th=[27919], 00:10:55.140 | 99.99th=[27919] 00:10:55.140 write: IOPS=5523, BW=21.6MiB/s (22.6MB/s)(21.8MiB/1012msec); 0 zone resets 00:10:55.140 slat (nsec): min=1739, max=10439k, avg=93770.03, stdev=557760.50 00:10:55.140 clat (usec): min=2728, max=52583, avg=12945.94, stdev=8847.17 00:10:55.140 lat (usec): min=2736, max=52585, avg=13039.71, stdev=8905.37 00:10:55.140 clat percentiles (usec): 00:10:55.140 | 1.00th=[ 4015], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6587], 00:10:55.140 | 30.00th=[ 7308], 40.00th=[ 8225], 50.00th=[10421], 60.00th=[11731], 00:10:55.140 | 70.00th=[13042], 80.00th=[17433], 90.00th=[26346], 95.00th=[32900], 00:10:55.140 | 99.00th=[41681], 99.50th=[44827], 99.90th=[52691], 99.95th=[52691], 00:10:55.140 | 99.99th=[52691] 00:10:55.140 bw ( KiB/s): min=16072, max=27632, per=28.24%, avg=21852.00, stdev=8174.15, samples=2 00:10:55.140 iops : min= 4018, max= 6908, avg=5463.00, stdev=2043.54, samples=2 00:10:55.140 lat (msec) : 4=0.51%, 10=48.01%, 20=41.52%, 50=9.82%, 100=0.13% 00:10:55.140 cpu : usr=5.14%, sys=6.43%, ctx=380, majf=0, minf=1 00:10:55.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:55.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.140 
issued rwts: total=5120,5590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.140 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.140 job3: (groupid=0, jobs=1): err= 0: pid=3699655: Thu Nov 7 13:16:02 2024 00:10:55.140 read: IOPS=4777, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1003msec) 00:10:55.140 slat (nsec): min=960, max=7968.2k, avg=92196.58, stdev=535084.87 00:10:55.140 clat (usec): min=2245, max=32518, avg=12221.64, stdev=4961.60 00:10:55.140 lat (usec): min=2251, max=32523, avg=12313.83, stdev=4984.93 00:10:55.140 clat percentiles (usec): 00:10:55.140 | 1.00th=[ 3589], 5.00th=[ 5604], 10.00th=[ 7570], 20.00th=[ 9372], 00:10:55.140 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11338], 60.00th=[12518], 00:10:55.140 | 70.00th=[13435], 80.00th=[13960], 90.00th=[17695], 95.00th=[23987], 00:10:55.140 | 99.00th=[28443], 99.50th=[32375], 99.90th=[32375], 99.95th=[32637], 00:10:55.140 | 99.99th=[32637] 00:10:55.140 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:55.140 slat (nsec): min=1669, max=18041k, avg=78138.12, stdev=511433.46 00:10:55.140 clat (usec): min=809, max=97316, avg=12201.76, stdev=10547.45 00:10:55.140 lat (usec): min=843, max=97327, avg=12279.90, stdev=10556.76 00:10:55.140 clat percentiles (usec): 00:10:55.140 | 1.00th=[ 1860], 5.00th=[ 2704], 10.00th=[ 4883], 20.00th=[ 7504], 00:10:55.140 | 30.00th=[ 8455], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[10814], 00:10:55.140 | 70.00th=[11600], 80.00th=[12518], 90.00th=[22938], 95.00th=[27132], 00:10:55.140 | 99.00th=[72877], 99.50th=[81265], 99.90th=[96994], 99.95th=[96994], 00:10:55.140 | 99.99th=[96994] 00:10:55.140 bw ( KiB/s): min=20480, max=24576, per=29.11%, avg=22528.00, stdev=2896.31, samples=2 00:10:55.140 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:55.140 lat (usec) : 1000=0.06% 00:10:55.140 lat (msec) : 2=0.89%, 4=4.06%, 10=30.11%, 20=54.59%, 50=9.32% 00:10:55.140 lat (msec) : 100=0.98% 00:10:55.140 cpu : usr=4.29%, sys=5.79%, ctx=516, majf=0, minf=1 00:10:55.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:55.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.140 issued rwts: total=4792,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.140 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.140 00:10:55.140 Run status group 0 (all jobs): 00:10:55.140 READ: bw=67.1MiB/s (70.3MB/s), 13.8MiB/s-19.8MiB/s (14.5MB/s-20.7MB/s), io=67.9MiB (71.2MB), run=1003-1012msec 00:10:55.140 WRITE: bw=75.6MiB/s (79.2MB/s), 15.9MiB/s-21.9MiB/s (16.7MB/s-23.0MB/s), io=76.5MiB (80.2MB), run=1003-1012msec 00:10:55.140 00:10:55.140 Disk stats (read/write): 00:10:55.140 nvme0n1: ios=3122/3327, merge=0/0, ticks=24214/18794, in_queue=43008, util=86.57% 00:10:55.140 nvme0n2: ios=2772/3584, merge=0/0, ticks=26123/32123, in_queue=58246, util=86.65% 00:10:55.140 nvme0n3: ios=4659/4887, merge=0/0, ticks=47550/52316, in_queue=99866, util=95.14% 00:10:55.140 nvme0n4: ios=3955/4608, merge=0/0, ticks=18713/27890, in_queue=46603, util=94.33% 00:10:55.140 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:55.140 [global] 00:10:55.140 thread=1 00:10:55.140 invalidate=1 00:10:55.140 rw=randwrite 00:10:55.140 time_based=1 00:10:55.140 runtime=1 00:10:55.141 ioengine=libaio 
00:10:55.141 direct=1 00:10:55.141 bs=4096 00:10:55.141 iodepth=128 00:10:55.141 norandommap=0 00:10:55.141 numjobs=1 00:10:55.141 00:10:55.141 verify_dump=1 00:10:55.141 verify_backlog=512 00:10:55.141 verify_state_save=0 00:10:55.141 do_verify=1 00:10:55.141 verify=crc32c-intel 00:10:55.141 [job0] 00:10:55.141 filename=/dev/nvme0n1 00:10:55.141 [job1] 00:10:55.141 filename=/dev/nvme0n2 00:10:55.141 [job2] 00:10:55.141 filename=/dev/nvme0n3 00:10:55.141 [job3] 00:10:55.141 filename=/dev/nvme0n4 00:10:55.141 Could not set queue depth (nvme0n1) 00:10:55.141 Could not set queue depth (nvme0n2) 00:10:55.141 Could not set queue depth (nvme0n3) 00:10:55.141 Could not set queue depth (nvme0n4) 00:10:55.399 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.399 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.399 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.399 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.399 fio-3.35 00:10:55.399 Starting 4 threads 00:10:56.807 00:10:56.807 job0: (groupid=0, jobs=1): err= 0: pid=3700149: Thu Nov 7 13:16:04 2024 00:10:56.807 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec) 00:10:56.807 slat (nsec): min=924, max=13356k, avg=87500.53, stdev=662879.43 00:10:56.807 clat (usec): min=3404, max=36670, avg=11250.56, stdev=4549.17 00:10:56.807 lat (usec): min=3865, max=36696, avg=11338.06, stdev=4600.49 00:10:56.807 clat percentiles (usec): 00:10:56.807 | 1.00th=[ 6325], 5.00th=[ 6915], 10.00th=[ 7242], 20.00th=[ 7635], 00:10:56.807 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 9503], 60.00th=[11207], 00:10:56.807 | 70.00th=[12518], 80.00th=[14353], 90.00th=[17433], 95.00th=[21365], 00:10:56.807 | 99.00th=[26346], 99.50th=[27919], 99.90th=[30802], 99.95th=[30802], 00:10:56.807 | 99.99th=[36439] 00:10:56.807 write: IOPS=5460, BW=21.3MiB/s (22.4MB/s)(21.6MiB/1011msec); 0 zone resets 00:10:56.807 slat (nsec): min=1608, max=12999k, avg=91483.62, stdev=600491.12 00:10:56.807 clat (usec): min=1214, max=40181, avg=12748.75, stdev=6436.39 00:10:56.807 lat (usec): min=1225, max=40189, avg=12840.23, stdev=6473.62 00:10:56.807 clat percentiles (usec): 00:10:56.807 | 1.00th=[ 3326], 5.00th=[ 4817], 10.00th=[ 5735], 20.00th=[ 7504], 00:10:56.807 | 30.00th=[ 8356], 40.00th=[10421], 50.00th=[11600], 60.00th=[14222], 00:10:56.807 | 70.00th=[15139], 80.00th=[15533], 90.00th=[21627], 95.00th=[25297], 00:10:56.807 | 99.00th=[35914], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:10:56.807 | 99.99th=[40109] 00:10:56.807 bw ( KiB/s): min=20312, max=22840, per=21.83%, avg=21576.00, stdev=1787.57, samples=2 00:10:56.807 iops : min= 5078, max= 5710, avg=5394.00, stdev=446.89, samples=2 00:10:56.807 lat (msec) : 2=0.10%, 4=1.13%, 10=43.59%, 20=45.33%, 50=9.85% 00:10:56.807 cpu : usr=3.47%, sys=6.44%, ctx=444, majf=0, minf=1 00:10:56.807 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:56.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.807 issued rwts: total=5120,5521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.807 job1: (groupid=0, jobs=1): err= 0: pid=3700151: Thu Nov 7 
13:16:04 2024 00:10:56.807 read: IOPS=6097, BW=23.8MiB/s (25.0MB/s)(24.1MiB/1010msec) 00:10:56.807 slat (nsec): min=966, max=16390k, avg=86967.38, stdev=648953.20 00:10:56.807 clat (usec): min=2212, max=27938, avg=10954.37, stdev=3573.98 00:10:56.807 lat (usec): min=2252, max=27941, avg=11041.34, stdev=3607.18 00:10:56.807 clat percentiles (usec): 00:10:56.807 | 1.00th=[ 4621], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 8356], 00:10:56.807 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10945], 00:10:56.807 | 70.00th=[11863], 80.00th=[13042], 90.00th=[16188], 95.00th=[17957], 00:10:56.807 | 99.00th=[23200], 99.50th=[25297], 99.90th=[27132], 99.95th=[27919], 00:10:56.807 | 99.99th=[27919] 00:10:56.807 write: IOPS=6590, BW=25.7MiB/s (27.0MB/s)(26.0MiB/1010msec); 0 zone resets 00:10:56.807 slat (nsec): min=1637, max=7502.2k, avg=61842.64, stdev=282075.54 00:10:56.808 clat (usec): min=1245, max=27935, avg=9075.58, stdev=2453.92 00:10:56.808 lat (usec): min=1254, max=27937, avg=9137.42, stdev=2468.89 00:10:56.808 clat percentiles (usec): 00:10:56.808 | 1.00th=[ 3458], 5.00th=[ 4752], 10.00th=[ 5669], 20.00th=[ 7046], 00:10:56.808 | 30.00th=[ 8029], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10028], 00:10:56.808 | 70.00th=[10159], 80.00th=[10290], 90.00th=[11207], 95.00th=[12125], 00:10:56.808 | 99.00th=[16909], 99.50th=[18220], 99.90th=[19792], 99.95th=[21890], 00:10:56.808 | 99.99th=[27919] 00:10:56.808 bw ( KiB/s): min=25992, max=26352, per=26.49%, avg=26172.00, stdev=254.56, samples=2 00:10:56.808 iops : min= 6498, max= 6588, avg=6543.00, stdev=63.64, samples=2 00:10:56.808 lat (msec) : 2=0.07%, 4=1.35%, 10=53.56%, 20=43.81%, 50=1.21% 00:10:56.808 cpu : usr=4.96%, sys=6.05%, ctx=761, majf=0, minf=1 00:10:56.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:56.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.808 issued rwts: total=6158,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.808 job2: (groupid=0, jobs=1): err= 0: pid=3700166: Thu Nov 7 13:16:04 2024 00:10:56.808 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:10:56.808 slat (nsec): min=1029, max=9698.8k, avg=71834.15, stdev=535375.77 00:10:56.808 clat (usec): min=3922, max=24597, avg=9726.89, stdev=2938.20 00:10:56.808 lat (usec): min=3926, max=24605, avg=9798.72, stdev=2974.18 00:10:56.808 clat percentiles (usec): 00:10:56.808 | 1.00th=[ 5407], 5.00th=[ 6718], 10.00th=[ 7046], 20.00th=[ 7701], 00:10:56.808 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241], 00:10:56.808 | 70.00th=[10290], 80.00th=[12125], 90.00th=[14091], 95.00th=[15795], 00:10:56.808 | 99.00th=[17957], 99.50th=[21103], 99.90th=[24511], 99.95th=[24511], 00:10:56.808 | 99.99th=[24511] 00:10:56.808 write: IOPS=7089, BW=27.7MiB/s (29.0MB/s)(27.9MiB/1006msec); 0 zone resets 00:10:56.808 slat (nsec): min=1659, max=11285k, avg=67501.44, stdev=483858.26 00:10:56.808 clat (usec): min=1198, max=35260, avg=8776.56, stdev=3879.05 00:10:56.808 lat (usec): min=1209, max=35269, avg=8844.06, stdev=3909.60 00:10:56.808 clat percentiles (usec): 00:10:56.808 | 1.00th=[ 3163], 5.00th=[ 4424], 10.00th=[ 5014], 20.00th=[ 6456], 00:10:56.808 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:10:56.808 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[12125], 95.00th=[13435], 00:10:56.808 | 
99.00th=[30802], 99.50th=[32375], 99.90th=[34866], 99.95th=[35390], 00:10:56.808 | 99.99th=[35390] 00:10:56.808 bw ( KiB/s): min=24744, max=31288, per=28.35%, avg=28016.00, stdev=4627.31, samples=2 00:10:56.808 iops : min= 6186, max= 7822, avg=7004.00, stdev=1156.83, samples=2 00:10:56.808 lat (msec) : 2=0.07%, 4=1.48%, 10=72.49%, 20=24.55%, 50=1.41% 00:10:56.808 cpu : usr=4.58%, sys=8.46%, ctx=553, majf=0, minf=1 00:10:56.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:56.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.808 issued rwts: total=6656,7132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.808 job3: (groupid=0, jobs=1): err= 0: pid=3700173: Thu Nov 7 13:16:04 2024 00:10:56.808 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:10:56.808 slat (nsec): min=932, max=10926k, avg=87954.07, stdev=591387.25 00:10:56.808 clat (usec): min=5112, max=38994, avg=11176.83, stdev=5106.80 00:10:56.808 lat (usec): min=5117, max=39011, avg=11264.78, stdev=5157.50 00:10:56.808 clat percentiles (usec): 00:10:56.808 | 1.00th=[ 5997], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8356], 00:10:56.808 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[10028], 00:10:56.808 | 70.00th=[10945], 80.00th=[12125], 90.00th=[16909], 95.00th=[22152], 00:10:56.808 | 99.00th=[35390], 99.50th=[36439], 99.90th=[38011], 99.95th=[38011], 00:10:56.808 | 99.99th=[39060] 00:10:56.808 write: IOPS=5638, BW=22.0MiB/s (23.1MB/s)(22.1MiB/1005msec); 0 zone resets 00:10:56.808 slat (nsec): min=1540, max=10188k, avg=84359.83, stdev=458734.19 00:10:56.808 clat (usec): min=1151, max=39354, avg=11398.84, stdev=5913.18 00:10:56.808 lat (usec): min=1196, max=40219, avg=11483.20, stdev=5957.33 00:10:56.808 clat percentiles (usec): 00:10:56.808 | 1.00th=[ 5080], 5.00th=[ 6718], 10.00th=[ 7767], 20.00th=[ 8225], 00:10:56.808 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:10:56.808 | 70.00th=[11207], 80.00th=[15008], 90.00th=[15401], 95.00th=[26346], 00:10:56.808 | 99.00th=[37487], 99.50th=[38011], 99.90th=[39584], 99.95th=[39584], 00:10:56.808 | 99.99th=[39584] 00:10:56.808 bw ( KiB/s): min=16384, max=28672, per=22.80%, avg=22528.00, stdev=8688.93, samples=2 00:10:56.808 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:10:56.808 lat (msec) : 2=0.01%, 10=63.69%, 20=28.68%, 50=7.62% 00:10:56.808 cpu : usr=3.19%, sys=5.68%, ctx=564, majf=0, minf=2 00:10:56.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:56.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.808 issued rwts: total=5632,5667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.808 00:10:56.808 Run status group 0 (all jobs): 00:10:56.808 READ: bw=91.1MiB/s (95.5MB/s), 19.8MiB/s-25.8MiB/s (20.7MB/s-27.1MB/s), io=92.1MiB (96.5MB), run=1005-1011msec 00:10:56.808 WRITE: bw=96.5MiB/s (101MB/s), 21.3MiB/s-27.7MiB/s (22.4MB/s-29.0MB/s), io=97.6MiB (102MB), run=1005-1011msec 00:10:56.808 00:10:56.808 Disk stats (read/write): 00:10:56.808 nvme0n1: ios=4524/4608, merge=0/0, ticks=43045/39426, in_queue=82471, util=83.97% 00:10:56.808 nvme0n2: ios=5174/5455, merge=0/0, ticks=51129/45499, 
in_queue=96628, util=88.57% 00:10:56.808 nvme0n3: ios=5429/5632, merge=0/0, ticks=51109/49608, in_queue=100717, util=93.14% 00:10:56.808 nvme0n4: ios=4755/5120, merge=0/0, ticks=25386/26207, in_queue=51593, util=97.11% 00:10:56.808 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:56.808 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3700454 00:10:56.808 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:56.808 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:56.808 [global] 00:10:56.808 thread=1 00:10:56.808 invalidate=1 00:10:56.808 rw=read 00:10:56.808 time_based=1 00:10:56.808 runtime=10 00:10:56.808 ioengine=libaio 00:10:56.808 direct=1 00:10:56.808 bs=4096 00:10:56.808 iodepth=1 00:10:56.808 norandommap=1 00:10:56.808 numjobs=1 00:10:56.808 00:10:56.808 [job0] 00:10:56.808 filename=/dev/nvme0n1 00:10:56.808 [job1] 00:10:56.808 filename=/dev/nvme0n2 00:10:56.808 [job2] 00:10:56.808 filename=/dev/nvme0n3 00:10:56.808 [job3] 00:10:56.808 filename=/dev/nvme0n4 00:10:56.808 Could not set queue depth (nvme0n1) 00:10:56.808 Could not set queue depth (nvme0n2) 00:10:56.808 Could not set queue depth (nvme0n3) 00:10:56.808 Could not set queue depth (nvme0n4) 00:10:57.073 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.073 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.073 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.073 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.073 fio-3.35 00:10:57.073 Starting 4 threads 00:10:59.748 13:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:59.748 13:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:59.748 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1974272, buflen=4096 00:10:59.748 fio: pid=3700680, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:00.007 13:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.007 13:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:00.007 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=647168, buflen=4096 00:11:00.007 fio: pid=3700674, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:00.267 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=12963840, buflen=4096 00:11:00.267 fio: pid=3700666, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:00.267 13:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.267 13:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:00.267 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=311296, buflen=4096 00:11:00.267 fio: pid=3700668, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:00.527 13:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.527 13:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:00.527 00:11:00.527 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3700666: Thu Nov 7 13:16:08 2024 00:11:00.527 read: IOPS=1070, BW=4280KiB/s (4383kB/s)(12.4MiB/2958msec) 00:11:00.527 slat (usec): min=6, max=31088, avg=53.74, stdev=870.88 00:11:00.527 clat (usec): min=409, max=1272, avg=868.08, stdev=91.87 00:11:00.527 lat (usec): min=435, max=32289, avg=921.84, stdev=884.20 00:11:00.527 clat percentiles (usec): 00:11:00.527 | 1.00th=[ 660], 5.00th=[ 725], 10.00th=[ 766], 20.00th=[ 799], 00:11:00.527 | 30.00th=[ 824], 40.00th=[ 848], 50.00th=[ 865], 60.00th=[ 881], 00:11:00.527 | 70.00th=[ 898], 80.00th=[ 922], 90.00th=[ 988], 95.00th=[ 1057], 00:11:00.527 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1188], 99.95th=[ 1237], 00:11:00.527 | 99.99th=[ 1270] 00:11:00.527 bw ( KiB/s): min= 4264, max= 4624, per=92.98%, avg=4515.20, stdev=147.36, samples=5 00:11:00.527 iops : min= 1066, max= 1156, avg=1128.80, stdev=36.84, samples=5 00:11:00.527 lat (usec) : 500=0.03%, 750=7.61%, 1000=83.10% 00:11:00.527 lat (msec) : 2=9.22% 00:11:00.527 cpu : usr=1.32%, sys=3.08%, ctx=3171, majf=0, minf=1 00:11:00.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.527 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.527 issued rwts: total=3166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.528 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3700668: Thu Nov 7 13:16:08 2024 00:11:00.528 read: IOPS=24, BW=95.1KiB/s (97.4kB/s)(304KiB/3197msec) 00:11:00.528 slat (usec): min=25, max=19605, avg=413.16, stdev=2361.12 00:11:00.528 clat (usec): min=1053, max=43012, avg=41354.05, stdev=4700.89 00:11:00.528 lat (usec): min=1110, max=61000, avg=41772.28, stdev=5267.61 00:11:00.528 clat percentiles (usec): 00:11:00.528 | 1.00th=[ 1057], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:11:00.528 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:11:00.528 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:11:00.528 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:00.528 | 99.99th=[43254] 00:11:00.528 bw ( KiB/s): min= 89, max= 96, per=1.94%, avg=94.83, stdev= 2.86, samples=6 00:11:00.528 iops : min= 22, max= 24, avg=23.67, stdev= 0.82, samples=6 00:11:00.528 lat (msec) : 2=1.30%, 50=97.40% 00:11:00.528 cpu : usr=0.16%, sys=0.00%, ctx=81, majf=0, minf=2 00:11:00.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.528 complete : 0=1.3%, 4=98.7%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.528 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.528 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3700674: Thu Nov 7 13:16:08 2024 00:11:00.528 read: IOPS=56, BW=225KiB/s (231kB/s)(632KiB/2803msec) 00:11:00.528 slat (usec): min=7, max=13854, avg=113.91, stdev=1096.62 00:11:00.528 clat (usec): min=556, max=42129, avg=17481.19, stdev=19720.02 00:11:00.528 lat (usec): min=596, max=54990, avg=17595.65, stdev=19855.50 00:11:00.528 clat percentiles (usec): 00:11:00.528 | 1.00th=[ 766], 5.00th=[ 955], 10.00th=[ 988], 20.00th=[ 1020], 00:11:00.528 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[40633], 00:11:00.528 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:00.528 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:00.528 | 99.99th=[42206] 00:11:00.528 bw ( KiB/s): min= 96, max= 640, per=4.96%, avg=241.60, stdev=235.04, samples=5 00:11:00.528 iops : min= 24, max= 160, avg=60.40, stdev=58.76, samples=5 00:11:00.528 lat (usec) : 750=0.63%, 1000=13.21% 00:11:00.528 lat (msec) : 2=44.65%, 50=40.88% 00:11:00.528 cpu : usr=0.07%, sys=0.25%, ctx=160, majf=0, minf=2 00:11:00.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.528 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.528 issued rwts: total=159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.528 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3700680: Thu Nov 7 13:16:08 2024 00:11:00.528 read: IOPS=183, BW=733KiB/s (750kB/s)(1928KiB/2631msec) 00:11:00.528 slat (nsec): min=25112, max=60962, avg=27386.15, stdev=2961.00 00:11:00.528 clat (usec): min=812, max=43017, avg=5377.09, stdev=12566.92 00:11:00.528 lat (usec): min=840, max=43042, avg=5404.48, stdev=12566.26 00:11:00.528 clat percentiles (usec): 00:11:00.528 | 1.00th=[ 881], 5.00th=[ 947], 10.00th=[ 971], 20.00th=[ 1004], 00:11:00.528 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:11:00.528 | 70.00th=[ 1106], 80.00th=[ 1156], 90.00th=[41157], 95.00th=[42206], 00:11:00.528 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:11:00.528 | 99.99th=[43254] 00:11:00.528 bw ( KiB/s): min= 96, max= 3448, per=15.77%, avg=766.40, stdev=1499.06, samples=5 00:11:00.528 iops : min= 24, max= 862, avg=191.60, stdev=374.76, samples=5 00:11:00.528 lat (usec) : 1000=18.43% 00:11:00.528 lat (msec) : 2=70.81%, 50=10.56% 00:11:00.528 cpu : usr=0.38%, sys=0.65%, ctx=483, majf=0, minf=2 00:11:00.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.528 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.528 issued rwts: total=483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.528 00:11:00.528 Run status group 0 (all jobs): 00:11:00.528 READ: bw=4856KiB/s (4972kB/s), 95.1KiB/s-4280KiB/s (97.4kB/s-4383kB/s), io=15.2MiB (15.9MB), run=2631-3197msec 00:11:00.528 00:11:00.528 Disk stats (read/write): 00:11:00.528 nvme0n1: 
ios=3056/0, merge=0/0, ticks=2576/0, in_queue=2576, util=91.72% 00:11:00.528 nvme0n2: ios=73/0, merge=0/0, ticks=3018/0, in_queue=3018, util=94.76% 00:11:00.528 nvme0n3: ios=153/0, merge=0/0, ticks=2546/0, in_queue=2546, util=95.99% 00:11:00.528 nvme0n4: ios=481/0, merge=0/0, ticks=2502/0, in_queue=2502, util=96.42% 00:11:00.528 13:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.528 13:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:00.789 13:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.789 13:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:01.049 13:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:01.049 13:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:01.309 13:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:01.309 13:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:01.569 13:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:01.569 13:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3700454 00:11:01.569 13:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:01.569 13:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.149 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.149 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:11:02.149 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:02.149 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.149 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:02.149 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.149 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:11:02.149 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:02.149 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:02.149 nvmf hotplug test: fio failed as expected 00:11:02.149 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.409 rmmod nvme_tcp 00:11:02.409 rmmod nvme_fabrics 00:11:02.409 rmmod nvme_keyring 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3696940 ']' 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3696940 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3696940 ']' 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3696940 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3696940 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3696940' 00:11:02.409 killing process with pid 3696940 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3696940 00:11:02.409 13:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3696940 00:11:03.350 13:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.350 13:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.350 13:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.350 13:16:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:03.350 13:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:03.350 13:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.350 13:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.350 13:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.350 13:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:03.350 13:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.350 13:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.350 13:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.264 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:05.264 00:11:05.264 real 0m31.825s 00:11:05.264 user 2m39.574s 00:11:05.264 sys 0m10.386s 00:11:05.264 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:05.264 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.264 ************************************ 00:11:05.264 END TEST nvmf_fio_target 00:11:05.264 ************************************ 00:11:05.264 13:16:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:05.264 13:16:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:05.264 13:16:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:05.264 13:16:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:05.264 ************************************ 00:11:05.264 START TEST nvmf_bdevio 00:11:05.264 ************************************ 00:11:05.264 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:05.526 * Looking for test storage... 
00:11:05.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:05.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.526 --rc genhtml_branch_coverage=1 00:11:05.526 --rc genhtml_function_coverage=1 00:11:05.526 --rc genhtml_legend=1 00:11:05.526 --rc geninfo_all_blocks=1 00:11:05.526 --rc geninfo_unexecuted_blocks=1 00:11:05.526 00:11:05.526 ' 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:05.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.526 --rc genhtml_branch_coverage=1 00:11:05.526 --rc genhtml_function_coverage=1 00:11:05.526 --rc genhtml_legend=1 00:11:05.526 --rc geninfo_all_blocks=1 00:11:05.526 --rc geninfo_unexecuted_blocks=1 00:11:05.526 00:11:05.526 ' 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:05.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.526 --rc genhtml_branch_coverage=1 00:11:05.526 --rc genhtml_function_coverage=1 00:11:05.526 --rc genhtml_legend=1 00:11:05.526 --rc geninfo_all_blocks=1 00:11:05.526 --rc geninfo_unexecuted_blocks=1 00:11:05.526 00:11:05.526 ' 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:05.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.526 --rc genhtml_branch_coverage=1 00:11:05.526 --rc genhtml_function_coverage=1 00:11:05.526 --rc genhtml_legend=1 00:11:05.526 --rc geninfo_all_blocks=1 00:11:05.526 --rc geninfo_unexecuted_blocks=1 00:11:05.526 00:11:05.526 ' 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.526 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:05.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:05.527 13:16:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:13.670 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:13.670 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:13.670 13:16:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:13.670 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:13.671 Found net devices under 0000:31:00.0: cvl_0_0 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:13.671 Found net devices under 0000:31:00.1: cvl_0_1 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.671 
13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.671 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:13.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:11:13.931 00:11:13.931 --- 10.0.0.2 ping statistics --- 00:11:13.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.931 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:13.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:11:13.931 00:11:13.931 --- 10.0.0.1 ping statistics --- 00:11:13.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.931 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.931 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3706652 00:11:13.932 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3706652 00:11:13.932 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:13.932 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3706652 ']' 00:11:13.932 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.932 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:13.932 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.932 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:13.932 13:16:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.192 [2024-11-07 13:16:21.974808] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
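For context on the nvmf_tcp_init trace above: the test isolates the target-side port (cvl_0_0) in a private network namespace so initiator and target can exchange real NVMe/TCP traffic over the two E810 ports on one host, then verifies reachability in both directions with ping. A minimal sketch of that topology, using the interface names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk                       # target gets a private netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                 # tagged so teardown can find it
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The sub-millisecond round-trip times in the ping statistics above suggest the two ports are directly connected, confirming the link before any NVMe/TCP traffic is attempted.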
00:11:14.192 [2024-11-07 13:16:21.974918] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.192 [2024-11-07 13:16:22.142000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.453 [2024-11-07 13:16:22.263541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.453 [2024-11-07 13:16:22.263601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.453 [2024-11-07 13:16:22.263615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.453 [2024-11-07 13:16:22.263629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.453 [2024-11-07 13:16:22.263640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.453 [2024-11-07 13:16:22.266655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:14.453 [2024-11-07 13:16:22.266825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:14.453 [2024-11-07 13:16:22.266958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.453 [2024-11-07 13:16:22.266982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.026 [2024-11-07 13:16:22.812129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.026 Malloc0 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.026 13:16:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.026 [2024-11-07 13:16:22.928831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:15.026 { 00:11:15.026 "params": { 00:11:15.026 "name": "Nvme$subsystem", 00:11:15.026 "trtype": "$TEST_TRANSPORT", 00:11:15.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:15.026 "adrfam": "ipv4", 00:11:15.026 "trsvcid": "$NVMF_PORT", 00:11:15.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:15.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:15.026 "hdgst": ${hdgst:-false}, 00:11:15.026 "ddgst": ${ddgst:-false} 00:11:15.026 }, 00:11:15.026 "method": "bdev_nvme_attach_controller" 00:11:15.026 } 00:11:15.026 EOF 00:11:15.026 )") 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:15.026 13:16:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:15.026 "params": { 00:11:15.026 "name": "Nvme1", 00:11:15.026 "trtype": "tcp", 00:11:15.026 "traddr": "10.0.0.2", 00:11:15.026 "adrfam": "ipv4", 00:11:15.026 "trsvcid": "4420", 00:11:15.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:15.026 "hdgst": false, 00:11:15.027 "ddgst": false 00:11:15.027 }, 00:11:15.027 "method": "bdev_nvme_attach_controller" 00:11:15.027 }' 00:11:15.027 [2024-11-07 13:16:23.024487] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
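The rpc_cmd calls traced above build the whole target in four steps (TCP transport, a 64 MiB malloc bdev, subsystem cnode1, a listener on 10.0.0.2:4420), and gen_nvmf_target_json then renders its heredoc template into the single bdev_nvme_attach_controller call that bdevio reads from /dev/fd/62, as printed by the printf trace. A hedged sketch of the same sequence as standalone commands (the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions; the script drives identical RPCs through its rpc_cmd wrapper):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)   # same config bdevio sees on /dev/fd/62

Note that bdevio runs as a second SPDK application (its own EAL init and reactors appear next), attaching to the target over TCP exactly as an external initiator would.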
00:11:15.027 [2024-11-07 13:16:23.024615] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706979 ] 00:11:15.287 [2024-11-07 13:16:23.180364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:15.287 [2024-11-07 13:16:23.283914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.287 [2024-11-07 13:16:23.283984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.287 [2024-11-07 13:16:23.284102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.859 I/O targets: 00:11:15.859 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:15.859 00:11:15.859 00:11:15.859 CUnit - A unit testing framework for C - Version 2.1-3 00:11:15.859 http://cunit.sourceforge.net/ 00:11:15.859 00:11:15.859 00:11:15.859 Suite: bdevio tests on: Nvme1n1 00:11:15.859 Test: blockdev write read block ...passed 00:11:15.859 Test: blockdev write zeroes read block ...passed 00:11:15.859 Test: blockdev write zeroes read no split ...passed 00:11:15.859 Test: blockdev write zeroes read split ...passed 00:11:16.121 Test: blockdev write zeroes read split partial ...passed 00:11:16.121 Test: blockdev reset ...[2024-11-07 13:16:23.900233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:16.121 [2024-11-07 13:16:23.900347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000417600 (9): Bad file descriptor 00:11:16.121 [2024-11-07 13:16:23.960487] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:16.121 passed 00:11:16.121 Test: blockdev write read 8 blocks ...passed 00:11:16.121 Test: blockdev write read size > 128k ...passed 00:11:16.121 Test: blockdev write read invalid size ...passed 00:11:16.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:16.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:16.121 Test: blockdev write read max offset ...passed 00:11:16.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:16.382 Test: blockdev writev readv 8 blocks ...passed 00:11:16.382 Test: blockdev writev readv 30 x 1block ...passed 00:11:16.382 Test: blockdev writev readv block ...passed 00:11:16.382 Test: blockdev writev readv size > 128k ...passed 00:11:16.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:16.382 Test: blockdev comparev and writev ...[2024-11-07 13:16:24.229424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.382 [2024-11-07 13:16:24.229458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:16.382 [2024-11-07 13:16:24.229481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.382 [2024-11-07 13:16:24.229491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:16.382 [2024-11-07 13:16:24.229880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.382 [2024-11-07 13:16:24.229893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:16.382 [2024-11-07 13:16:24.229909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.382 [2024-11-07 13:16:24.229920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:16.382 [2024-11-07 13:16:24.230323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.382 [2024-11-07 13:16:24.230335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:16.382 [2024-11-07 13:16:24.230348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.382 [2024-11-07 13:16:24.230355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:16.382 [2024-11-07 13:16:24.230721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.382 [2024-11-07 13:16:24.230733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:16.382 [2024-11-07 13:16:24.230745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:16.382 [2024-11-07 13:16:24.230753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:16.382 passed 00:11:16.382 Test: blockdev nvme passthru rw ...passed 00:11:16.383 Test: blockdev nvme passthru vendor specific ...[2024-11-07 13:16:24.314701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:16.383 [2024-11-07 13:16:24.314723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:16.383 [2024-11-07 13:16:24.314982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:16.383 [2024-11-07 13:16:24.314993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:16.383 [2024-11-07 13:16:24.315274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:16.383 [2024-11-07 13:16:24.315284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:16.383 [2024-11-07 13:16:24.315661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:16.383 [2024-11-07 13:16:24.315673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:16.383 passed 00:11:16.383 Test: blockdev nvme admin passthru ...passed 00:11:16.383 Test: blockdev copy ...passed 00:11:16.383 00:11:16.383 Run Summary: Type Total Ran Passed Failed Inactive 00:11:16.383 suites 1 1 n/a 0 0 00:11:16.383 tests 23 23 23 0 0 00:11:16.383 asserts 152 152 152 0 n/a 00:11:16.383 00:11:16.383 Elapsed time = 1.606 seconds 00:11:17.326 13:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.326 13:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.326 13:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:17.326 13:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.326 13:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:17.326 13:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:17.326 13:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.326 13:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:17.327 13:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.327 13:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:17.327 13:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.327 13:16:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.327 rmmod nvme_tcp 00:11:17.327 rmmod nvme_fabrics 00:11:17.327 rmmod nvme_keyring 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
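All 23 bdevio cases pass in about 1.6 seconds, and teardown begins: the subsystem is deleted, the kernel initiator modules are unloaded (the rmmod lines above), and the lines that follow kill the nvmf_tgt process and undo the firewall and namespace changes. A rough sketch of that nvmftestfini sequence, assuming this run's names (the netns-delete step is an assumption about what _remove_spdk_ns does):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp                                 # cascades to nvme_fabrics, nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid"                      # stop the target (pid 3706652 here)
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                         # assumption: done by _remove_spdk_ns
    ip -4 addr flush cvl_0_1                                # clear the initiator-side address

Filtering iptables-save through grep -v SPDK_NVMF is why the setup step tagged its ACCEPT rule with that comment: cleanup removes exactly the rules the test added and nothing else.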
00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3706652 ']' 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3706652 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3706652 ']' 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3706652 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3706652 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3706652' 00:11:17.327 killing process with pid 3706652 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3706652 00:11:17.327 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3706652 00:11:17.898 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.898 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.898 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.898 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:17.898 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:17.898 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.898 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.898 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.898 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.898 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.898 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.898 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.442 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:20.443 00:11:20.443 real 0m14.619s 00:11:20.443 user 0m19.872s 00:11:20.443 sys 0m7.191s 00:11:20.443 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.443 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:20.443 ************************************ 00:11:20.443 END TEST nvmf_bdevio 00:11:20.443 ************************************ 00:11:20.443 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:20.443 00:11:20.443 real 5m29.480s 00:11:20.443 user 12m32.042s 00:11:20.443 sys 2m0.530s 
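The two timing blocks here are nested: roughly 14.6 s for the nvmf_bdevio script itself and 5 m 29 s for the whole nvmf_target_core suite that contained it. Each level comes from the run_test helper in autotest_common.sh, which, roughly, wraps the command in bash's time builtin and prints the banner pairs seen throughout this log (a simplified sketch under that assumption; the real helper also manages xtrace state and exit-status bookkeeping):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys block above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

The next suite is launched the same way below: run_test nvmf_target_extra .../nvmf_target_extra.sh --transport=tcp, with the test name as the first argument and the remaining arguments executed as the timed command.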
00:11:20.443 13:16:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.443 13:16:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:20.443 ************************************ 00:11:20.443 END TEST nvmf_target_core 00:11:20.443 ************************************ 00:11:20.443 13:16:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:20.443 13:16:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:20.443 13:16:27 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.443 13:16:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:20.443 ************************************ 00:11:20.443 START TEST nvmf_target_extra 00:11:20.443 ************************************ 00:11:20.443 13:16:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:20.443 * Looking for test storage... 00:11:20.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:20.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.443 --rc genhtml_branch_coverage=1 00:11:20.443 --rc genhtml_function_coverage=1 00:11:20.443 --rc genhtml_legend=1 00:11:20.443 --rc geninfo_all_blocks=1 00:11:20.443 --rc geninfo_unexecuted_blocks=1 00:11:20.443 00:11:20.443 ' 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:20.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.443 --rc genhtml_branch_coverage=1 00:11:20.443 --rc genhtml_function_coverage=1 00:11:20.443 --rc genhtml_legend=1 00:11:20.443 --rc geninfo_all_blocks=1 00:11:20.443 --rc geninfo_unexecuted_blocks=1 00:11:20.443 00:11:20.443 ' 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:20.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.443 --rc genhtml_branch_coverage=1 00:11:20.443 --rc genhtml_function_coverage=1 00:11:20.443 --rc genhtml_legend=1 00:11:20.443 --rc geninfo_all_blocks=1 00:11:20.443 --rc geninfo_unexecuted_blocks=1 00:11:20.443 00:11:20.443 ' 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:20.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.443 --rc genhtml_branch_coverage=1 00:11:20.443 --rc genhtml_function_coverage=1 00:11:20.443 --rc genhtml_legend=1 00:11:20.443 --rc geninfo_all_blocks=1 00:11:20.443 --rc geninfo_unexecuted_blocks=1 00:11:20.443 00:11:20.443 ' 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.443 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.444 ************************************ 00:11:20.444 START TEST nvmf_example 00:11:20.444 ************************************ 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:20.444 * Looking for test storage... 
00:11:20.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.444 --rc genhtml_branch_coverage=1 00:11:20.444 --rc genhtml_function_coverage=1 00:11:20.444 --rc genhtml_legend=1 00:11:20.444 --rc geninfo_all_blocks=1 00:11:20.444 --rc geninfo_unexecuted_blocks=1 00:11:20.444 00:11:20.444 ' 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.444 --rc genhtml_branch_coverage=1 00:11:20.444 --rc genhtml_function_coverage=1 00:11:20.444 --rc genhtml_legend=1 00:11:20.444 --rc geninfo_all_blocks=1 00:11:20.444 --rc geninfo_unexecuted_blocks=1 00:11:20.444 00:11:20.444 ' 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.444 --rc genhtml_branch_coverage=1 00:11:20.444 --rc genhtml_function_coverage=1 00:11:20.444 --rc genhtml_legend=1 00:11:20.444 --rc geninfo_all_blocks=1 00:11:20.444 --rc geninfo_unexecuted_blocks=1 00:11:20.444 00:11:20.444 ' 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:20.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.444 --rc genhtml_branch_coverage=1 00:11:20.444 --rc genhtml_function_coverage=1 00:11:20.444 --rc genhtml_legend=1 00:11:20.444 --rc geninfo_all_blocks=1 00:11:20.444 --rc geninfo_unexecuted_blocks=1 00:11:20.444 00:11:20.444 ' 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:20.444 13:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:20.444 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:20.445 13:16:28 
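One genuine runtime error is recorded above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' and bash complains "integer expression expected", because the tested variable expanded to the empty string and -eq only accepts integers; the harness tolerates the failure and falls through to the else branches. A sketch of the failure mode and the usual guard (FLAG is a hypothetical stand-in; the variable actually tested on that line is not visible in this trace):

# FLAG deliberately left empty to reproduce the logged error.
FLAG=""

# What common.sh effectively ran -- errors on stderr, condition is false:
if [ "$FLAG" -eq 1 ] 2>/dev/null; then echo "never reached"; fi

# Defaulting the expansion keeps the test well-formed either way:
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset or 0"
fi
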
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:20.445 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:28.587 13:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.587 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:28.588 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
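The e810/x722/mlx buckets filled above are keyed lookups into a vendor:device cache (keys like "$intel:0x1592" expand to 0x8086:0x1592). A sketch of building an equivalent cache from lspci, assuming the -nmD machine-readable output format (slot first, then quoted class, vendor, and device fields):

# Map "0xVENDOR:0xDEVICE" -> space-separated PCI addresses.
declare -A pci_bus_cache
while read -r slot _ vendor device _; do
    vendor=${vendor//\"/} device=${device//\"/}      # strip lspci -m quoting
    pci_bus_cache["0x$vendor:0x$device"]+="$slot "
done < <(lspci -nmD)

# The two E810 ports this run found are device ID 0x159b:
echo "e810 ports: ${pci_bus_cache[0x8086:0x159b]:-none}"
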
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:28.588 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:28.588 Found net devices under 0000:31:00.0: cvl_0_0 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:28.588 Found net devices under 0000:31:00.1: cvl_0_1 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.588 13:16:36 
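Each "Found net devices under ..." record above comes from a single sysfs glob: a PCI function's net/ directory lists the kernel interfaces bound to it, and stripping the leading path leaves the interface name. Standalone, using the first address from this log (substitute any function from lspci -D on another machine):

pci=0000:31:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
if [[ -e ${pci_net_devs[0]} ]]; then
    pci_net_devs=("${pci_net_devs[@]##*/}")   # basename only, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
else
    echo "no net device bound to $pci" >&2    # glob did not match anything
fi
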
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:28.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:28.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.714 ms 00:11:28.588 00:11:28.588 --- 10.0.0.2 ping statistics --- 00:11:28.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.588 rtt min/avg/max/mdev = 0.714/0.714/0.714/0.000 ms 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:11:28.588 00:11:28.588 --- 10.0.0.1 ping statistics --- 00:11:28.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.588 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.588 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.849 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:28.849 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:28.849 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3712308 00:11:28.849 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:28.849 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3712308 00:11:28.849 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:28.849 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3712308 ']' 00:11:28.849 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.849 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:28.849 13:16:36 
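Everything from nvmf_tcp_init through the two pings above builds the same loopback topology: the target port is moved into its own network namespace while the initiator port stays in the root namespace, so NVMe/TCP traffic really crosses the cable between the two E810 ports. A condensed replay of those commands, assuming root and this rig's interface names (cvl_0_0/cvl_0_1; any directly connected pair works):

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                  # target side leaves the root namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port, tagged so teardown can strip every test rule
# in one pass later: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator
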
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.849 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:28.849 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:29.791 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:42.021 Initializing NVMe Controllers 00:11:42.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:42.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:42.021 Initialization complete. Launching workers. 00:11:42.021 ======================================================== 00:11:42.021 Latency(us) 00:11:42.021 Device Information : IOPS MiB/s Average min max 00:11:42.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16809.97 65.66 3806.86 890.26 15554.37 00:11:42.021 ======================================================== 00:11:42.021 Total : 16809.97 65.66 3806.86 890.26 15554.37 00:11:42.021 00:11:42.021 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:42.021 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:42.021 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.021 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:42.021 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.021 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:42.021 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.021 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.021 rmmod nvme_tcp 00:11:42.021 rmmod nvme_fabrics 00:11:42.021 rmmod nvme_keyring 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3712308 ']' 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3712308 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3712308 ']' 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3712308 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3712308 00:11:42.021 13:16:48 
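The rpc_cmd calls and the perf run above are the whole functional test: four RPCs provision a 64 MiB RAM-backed namespace behind a TCP listener, then spdk_nvme_perf drives 4 KiB random I/O (30% reads, queue depth 64, 10 s) at it and prints the latency table shown. Spelled out as plain commands, assuming the example target is already running in the namespace and rpc.py's default /var/tmp/spdk.sock socket:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192        # flags as traced (-u 8192: in-capsule data size)
$RPC bdev_malloc_create 64 512                      # 64 MiB bdev, 512 B blocks -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
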
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3712308' 00:11:42.021 killing process with pid 3712308 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3712308 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3712308 00:11:42.021 nvmf threads initialize successfully 00:11:42.021 bdev subsystem init successfully 00:11:42.021 created a nvmf target service 00:11:42.021 create targets's poll groups done 00:11:42.021 all subsystems of target started 00:11:42.021 nvmf target is running 00:11:42.021 all subsystems of target stopped 00:11:42.021 destroy targets's poll groups done 00:11:42.021 destroyed the nvmf target service 00:11:42.021 bdev subsystem finish successfully 00:11:42.021 nvmf threads destroy successfully 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:42.021 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.022 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.022 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.022 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.022 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.022 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.022 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.409 00:11:43.409 real 0m22.914s 00:11:43.409 user 0m48.677s 00:11:43.409 sys 0m7.630s 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:43.409 ************************************ 00:11:43.409 END TEST nvmf_example 00:11:43.409 ************************************ 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:43.409 ************************************ 00:11:43.409 START TEST nvmf_filesystem 00:11:43.409 ************************************ 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:43.409 * Looking for test storage... 00:11:43.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.409 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:43.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.410 --rc genhtml_branch_coverage=1 00:11:43.410 --rc genhtml_function_coverage=1 00:11:43.410 --rc genhtml_legend=1 00:11:43.410 --rc geninfo_all_blocks=1 00:11:43.410 --rc geninfo_unexecuted_blocks=1 00:11:43.410 00:11:43.410 ' 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:43.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.410 --rc genhtml_branch_coverage=1 00:11:43.410 --rc genhtml_function_coverage=1 00:11:43.410 --rc genhtml_legend=1 00:11:43.410 --rc geninfo_all_blocks=1 00:11:43.410 --rc geninfo_unexecuted_blocks=1 00:11:43.410 00:11:43.410 ' 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:43.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.410 --rc genhtml_branch_coverage=1 00:11:43.410 --rc genhtml_function_coverage=1 00:11:43.410 --rc genhtml_legend=1 00:11:43.410 --rc geninfo_all_blocks=1 00:11:43.410 --rc geninfo_unexecuted_blocks=1 00:11:43.410 00:11:43.410 ' 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:43.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.410 --rc genhtml_branch_coverage=1 00:11:43.410 --rc genhtml_function_coverage=1 00:11:43.410 --rc genhtml_legend=1 00:11:43.410 --rc geninfo_all_blocks=1 00:11:43.410 --rc geninfo_unexecuted_blocks=1 00:11:43.410 00:11:43.410 ' 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:43.410 13:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:43.410 
13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:43.410 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:43.411 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:43.411 #define SPDK_CONFIG_H 00:11:43.411 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:43.411 #define SPDK_CONFIG_APPS 1 00:11:43.411 #define SPDK_CONFIG_ARCH native 00:11:43.411 #define SPDK_CONFIG_ASAN 1 00:11:43.411 #undef SPDK_CONFIG_AVAHI 00:11:43.411 #undef SPDK_CONFIG_CET 00:11:43.411 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:43.411 #define SPDK_CONFIG_COVERAGE 1 00:11:43.411 #define SPDK_CONFIG_CROSS_PREFIX 00:11:43.411 #undef SPDK_CONFIG_CRYPTO 00:11:43.411 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:43.411 #undef SPDK_CONFIG_CUSTOMOCF 00:11:43.411 #undef SPDK_CONFIG_DAOS 00:11:43.411 #define SPDK_CONFIG_DAOS_DIR 00:11:43.411 #define SPDK_CONFIG_DEBUG 1 00:11:43.411 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:43.411 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:43.411 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:43.411 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:43.411 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:43.411 #undef SPDK_CONFIG_DPDK_UADK 00:11:43.411 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:43.411 #define SPDK_CONFIG_EXAMPLES 1 00:11:43.411 #undef SPDK_CONFIG_FC 00:11:43.411 #define SPDK_CONFIG_FC_PATH 00:11:43.411 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:43.411 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:43.411 #define SPDK_CONFIG_FSDEV 1 00:11:43.411 #undef SPDK_CONFIG_FUSE 00:11:43.411 #undef SPDK_CONFIG_FUZZER 00:11:43.411 #define SPDK_CONFIG_FUZZER_LIB 00:11:43.411 #undef SPDK_CONFIG_GOLANG 00:11:43.411 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:43.411 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:43.411 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:43.411 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:43.411 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:43.411 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:43.411 #undef SPDK_CONFIG_HAVE_LZ4 00:11:43.411 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:43.411 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:43.411 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:43.411 #define SPDK_CONFIG_IDXD 1 00:11:43.411 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:43.411 #undef SPDK_CONFIG_IPSEC_MB 00:11:43.411 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:43.411 #define SPDK_CONFIG_ISAL 1 00:11:43.411 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:43.411 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:43.411 #define SPDK_CONFIG_LIBDIR 00:11:43.411 #undef SPDK_CONFIG_LTO 00:11:43.411 #define SPDK_CONFIG_MAX_LCORES 128 00:11:43.411 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:43.411 #define SPDK_CONFIG_NVME_CUSE 1 00:11:43.411 #undef SPDK_CONFIG_OCF 00:11:43.411 #define SPDK_CONFIG_OCF_PATH 00:11:43.411 #define SPDK_CONFIG_OPENSSL_PATH 00:11:43.411 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:43.411 #define SPDK_CONFIG_PGO_DIR 00:11:43.411 #undef SPDK_CONFIG_PGO_USE 00:11:43.411 #define SPDK_CONFIG_PREFIX /usr/local 00:11:43.411 #undef SPDK_CONFIG_RAID5F 00:11:43.411 #undef SPDK_CONFIG_RBD 00:11:43.411 #define SPDK_CONFIG_RDMA 1 00:11:43.411 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:43.411 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:43.411 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:43.411 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:43.411 #define SPDK_CONFIG_SHARED 1 00:11:43.411 #undef SPDK_CONFIG_SMA 00:11:43.411 #define SPDK_CONFIG_TESTS 1 00:11:43.411 #undef SPDK_CONFIG_TSAN 
00:11:43.411 #define SPDK_CONFIG_UBLK 1 00:11:43.411 #define SPDK_CONFIG_UBSAN 1 00:11:43.411 #undef SPDK_CONFIG_UNIT_TESTS 00:11:43.411 #undef SPDK_CONFIG_URING 00:11:43.411 #define SPDK_CONFIG_URING_PATH 00:11:43.411 #undef SPDK_CONFIG_URING_ZNS 00:11:43.411 #undef SPDK_CONFIG_USDT 00:11:43.411 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:43.411 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:43.411 #undef SPDK_CONFIG_VFIO_USER 00:11:43.411 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:43.411 #define SPDK_CONFIG_VHOST 1 00:11:43.411 #define SPDK_CONFIG_VIRTIO 1 00:11:43.411 #undef SPDK_CONFIG_VTUNE 00:11:43.411 #define SPDK_CONFIG_VTUNE_DIR 00:11:43.411 #define SPDK_CONFIG_WERROR 1 00:11:43.411 #define SPDK_CONFIG_WPDK_DIR 00:11:43.411 #undef SPDK_CONFIG_XNVME 00:11:43.412 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:43.412 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:43.676 13:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:43.676 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
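The long run of paired `: <value>` / `export SPDK_TEST_*` entries above and below is the harness giving every test flag a defined value before exporting it; the xtrace shows the already-expanded form (`: 0`, `: 1`), which is consistent with bash's `: ${VAR:=default}` idiom, where the expansion echoes the flag's effective value and the default only wins when the flag was unset. A minimal sketch of that pattern (flag names taken from the trace, surrounding logic illustrative):

    # ":" is a built-in no-op that still expands its arguments, so
    # ${VAR:=default} assigns only when VAR is unset or empty; the
    # xtrace prints the expanded result, e.g. ": 0" or ": 1".
    : "${SPDK_TEST_NVMF:=0}"
    : "${SPDK_TEST_NVME_CLI:=0}"
    export SPDK_TEST_NVMF SPDK_TEST_NVME_CLI

    # Later stages gate work on the exported flags:
    if (( SPDK_TEST_NVMF )); then
        echo "NVMe-oF functional tests enabled"
    fi

The remaining flag exports continue in the trace below.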
00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:43.677 13:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.677 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
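The ASAN_OPTIONS export above (and the matching UBSAN_OPTIONS assignment that continues below) configures the sanitizer runtimes for every instrumented process the test launches: abort on the first report, keep coredumps disabled, and have UBSAN print stack traces and exit with a distinctive code. Sanitizers read these variables at process start-up as colon-separated key=value lists. A short sketch of the same setup (option strings copied verbatim from the trace; the launched binary is illustrative):

    # Colon-separated key=value lists consumed by the ASan/UBSan
    # runtimes when an instrumented binary starts.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    # Any instrumented process started from this environment inherits
    # the settings (target binary shown for illustration only):
    ./build/bin/spdk_tgt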
00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
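Just above, the script rebuilds its leak-suppression file: it deletes /var/tmp/asan_suppression_file, concatenates any stock suppressions back in (the `cat` at line 204), appends a rule for libfuse3.so, and points LSAN_OPTIONS at the result so known FUSE leaks do not fail the run. A condensed sketch of that technique, with the `cat` source elided as it is in the trace:

    # LeakSanitizer suppressions: one "leak:<pattern>" rule per line,
    # matched against frames in a reported leak's stack.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file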
00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3715139 ]] 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3715139 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
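After confirming with `kill -0 3715139` that the test's own PID is still alive (signal 0 delivers nothing but still performs the existence check), the harness calls `set_test_storage 2147483648`; the trace that follows builds candidate directories with mktemp, parses `df -T` into the mounts/fss/sizes/avails arrays, and settles on the overlay root with roughly 122 GB available. A condensed sketch of that selection, assuming GNU df and greatly simplified from the helper's array walk below:

    # Resolve the mount point backing the test directory and check
    # its free space, as set_test_storage's df walk does below.
    requested_size=2147483648   # 2 GiB, padded to 2214592512 in the trace
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=$(df --output=avail -B1 "$mount" | tail -n 1)
    if (( target_space >= requested_size )); then
        printf '* Found test storage at %s\n' "$target_dir"
    fi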
00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.SLrgWn 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.SLrgWn/tests/target /tmp/spdk.SLrgWn 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:43.678 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:43.679 13:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122227761152 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356550144 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=7128788992 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666906624 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678273024 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847697408 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23613440 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.679 13:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677396480 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678277120 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=880640 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:43.679 * Looking for test storage... 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122227761152 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9343381504 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.679 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:43.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.680 --rc genhtml_branch_coverage=1 00:11:43.680 --rc genhtml_function_coverage=1 00:11:43.680 --rc genhtml_legend=1 00:11:43.680 --rc geninfo_all_blocks=1 00:11:43.680 --rc geninfo_unexecuted_blocks=1 00:11:43.680 00:11:43.680 ' 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:43.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.680 --rc genhtml_branch_coverage=1 00:11:43.680 --rc genhtml_function_coverage=1 00:11:43.680 --rc genhtml_legend=1 00:11:43.680 --rc geninfo_all_blocks=1 00:11:43.680 --rc geninfo_unexecuted_blocks=1 00:11:43.680 00:11:43.680 ' 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:43.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.680 --rc genhtml_branch_coverage=1 00:11:43.680 --rc genhtml_function_coverage=1 00:11:43.680 --rc genhtml_legend=1 00:11:43.680 --rc geninfo_all_blocks=1 00:11:43.680 --rc geninfo_unexecuted_blocks=1 00:11:43.680 00:11:43.680 ' 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:43.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.680 --rc genhtml_branch_coverage=1 00:11:43.680 --rc genhtml_function_coverage=1 00:11:43.680 --rc genhtml_legend=1 00:11:43.680 --rc geninfo_all_blocks=1 00:11:43.680 --rc geninfo_unexecuted_blocks=1 00:11:43.680 00:11:43.680 ' 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.680 13:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.680 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.942 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:43.942 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:43.942 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.942 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:52.088 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:52.088 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.088 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.088 13:16:59 
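The "Found 0000:31:00.x (0x8086 - 0x159b)" lines come from matching every PCI function against the vendor:device tables built just above (Intel E810 and X722 variants, plus assorted mlx5 parts). A rough standalone approximation using pciutils; the ID list is abbreviated to the two E810 entries relevant to this run, and lspci availability is an assumption:

#!/usr/bin/env bash
# Enumerate PCI functions matching known NIC vendor:device pairs.
# 8086:1592 / 8086:159b are the Intel E810 variants from the table above.
for id in 8086:1592 8086:159b; do
  # -D prints the full domain:bus:dev.fn address, -nn keeps numeric IDs.
  lspci -D -nn -d "$id" | while read -r bdf _; do
    echo "Found $bdf ($id)"
  done
done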
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:52.089 Found net devices under 0000:31:00.0: cvl_0_0 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:52.089 Found net devices under 0000:31:00.1: cvl_0_1 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.089 13:16:59 
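Mapping each matched PCI function to its kernel interface, as nvmf/common.sh@411 does above, is a sysfs glob: every bound network driver exposes its netdevs under /sys/bus/pci/devices/<bdf>/net/. A minimal sketch of the same lookup (the BDF is the one from this run; substitute your own):

#!/usr/bin/env bash
bdf=0000:31:00.0   # first E810 port found in the trace above
# Each entry under .../net is a netdev name, e.g. cvl_0_0 in this run.
for dev in /sys/bus/pci/devices/$bdf/net/*; do
  [ -e "$dev" ] || continue          # glob may not match if the driver is unbound
  name=${dev##*/}
  state=$(cat "$dev/operstate")
  echo "Found net device under $bdf: $name ($state)"
done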
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.089 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.089 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.089 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.089 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.089 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:11:52.350 00:11:52.350 --- 10.0.0.2 ping statistics --- 00:11:52.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.350 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:11:52.350 00:11:52.350 --- 10.0.0.1 ping statistics --- 00:11:52.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.350 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:52.350 ************************************ 00:11:52.350 START TEST nvmf_filesystem_no_in_capsule 00:11:52.350 ************************************ 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3719454 00:11:52.350 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3719454 00:11:52.351 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.351 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3719454 ']' 00:11:52.351 
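nvmf_tcp_init, whose ping results appear just above, builds the test topology without any veth pairs: one physical port (cvl_0_0) is moved into a fresh namespace to act as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two reach each other over the physical link. A condensed replay of those commands as a sketch (requires root and two connected ports; names and addresses copied from this run):

#!/usr/bin/env bash
set -e
ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"             # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator stays in the root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
# Open the NVMe/TCP port on the initiator side, mirroring the ipts helper.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # root namespace -> namespaced target
ip netns exec "$ns" ping -c 1 10.0.0.1      # and back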
13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.351 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:52.351 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.351 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:52.351 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.611 [2024-11-07 13:17:00.391275] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:11:52.611 [2024-11-07 13:17:00.391404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.611 [2024-11-07 13:17:00.557336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.871 [2024-11-07 13:17:00.658996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.871 [2024-11-07 13:17:00.659039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.871 [2024-11-07 13:17:00.659051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.871 [2024-11-07 13:17:00.659062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.871 [2024-11-07 13:17:00.659071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:52.871 [2024-11-07 13:17:00.661337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.871 [2024-11-07 13:17:00.661425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.871 [2024-11-07 13:17:00.661569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.871 [2024-11-07 13:17:00.661594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.441 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:53.441 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:53.441 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:53.441 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.442 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.442 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.442 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:53.442 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:53.442 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.442 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.442 [2024-11-07 13:17:01.207048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.442 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.442 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:53.442 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.442 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.702 Malloc1 00:11:53.702 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.702 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.702 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.702 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.702 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.702 13:17:01 
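With the target process up (reactors started on cores 0-3 above), the test provisions everything over the RPC socket: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev, and a subsystem, with the matching nvmf_subsystem_add_ns / nvmf_subsystem_add_listener calls following just below. Roughly the same sequence using SPDK's rpc.py client as a sketch (the client path is an assumption; the rpc_cmd helper in the trace wraps the same calls):

#!/usr/bin/env bash
rpc=./scripts/rpc.py        # assumed path to SPDK's RPC client
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # -u: IO unit size, -c: in-capsule data bytes
$rpc bdev_malloc_create 512 512 -b Malloc1           # 512 MiB bdev with 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420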
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.702 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.702 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.702 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.702 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.702 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.702 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.702 [2024-11-07 13:17:01.651455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.702 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.703 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:53.703 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:53.703 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:53.703 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:53.703 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:53.703 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:53.703 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.703 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.703 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.703 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:53.703 { 00:11:53.703 "name": "Malloc1", 00:11:53.703 "aliases": [ 00:11:53.703 "b21d204c-5da6-4fc5-b995-f400097ef4c1" 00:11:53.703 ], 00:11:53.703 "product_name": "Malloc disk", 00:11:53.703 "block_size": 512, 00:11:53.703 "num_blocks": 1048576, 00:11:53.703 "uuid": "b21d204c-5da6-4fc5-b995-f400097ef4c1", 00:11:53.703 "assigned_rate_limits": { 00:11:53.703 "rw_ios_per_sec": 0, 00:11:53.703 "rw_mbytes_per_sec": 0, 00:11:53.703 "r_mbytes_per_sec": 0, 00:11:53.703 "w_mbytes_per_sec": 0 00:11:53.703 }, 00:11:53.703 "claimed": true, 00:11:53.703 "claim_type": "exclusive_write", 00:11:53.703 "zoned": false, 00:11:53.703 "supported_io_types": { 00:11:53.703 "read": 
true, 00:11:53.703 "write": true, 00:11:53.703 "unmap": true, 00:11:53.703 "flush": true, 00:11:53.703 "reset": true, 00:11:53.703 "nvme_admin": false, 00:11:53.703 "nvme_io": false, 00:11:53.703 "nvme_io_md": false, 00:11:53.703 "write_zeroes": true, 00:11:53.703 "zcopy": true, 00:11:53.703 "get_zone_info": false, 00:11:53.703 "zone_management": false, 00:11:53.703 "zone_append": false, 00:11:53.703 "compare": false, 00:11:53.703 "compare_and_write": false, 00:11:53.703 "abort": true, 00:11:53.703 "seek_hole": false, 00:11:53.703 "seek_data": false, 00:11:53.703 "copy": true, 00:11:53.703 "nvme_iov_md": false 00:11:53.703 }, 00:11:53.703 "memory_domains": [ 00:11:53.703 { 00:11:53.703 "dma_device_id": "system", 00:11:53.703 "dma_device_type": 1 00:11:53.703 }, 00:11:53.703 { 00:11:53.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.703 "dma_device_type": 2 00:11:53.703 } 00:11:53.703 ], 00:11:53.703 "driver_specific": {} 00:11:53.703 } 00:11:53.703 ]' 00:11:53.703 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:53.963 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:53.963 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:53.963 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:53.963 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:53.963 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:53.963 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:53.963 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.347 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:55.347 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:55.347 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.347 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:55.347 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:57.891 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:58.462 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.406 ************************************ 00:11:59.406 START TEST filesystem_ext4 00:11:59.406 ************************************ 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
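The target/filesystem.sh@58..67 steps above cross-check sizes from both ends: the bdev size reported over RPC (block_size times num_blocks out of bdev_get_bdevs, parsed with jq) must equal the size the initiator sees for the freshly connected nvme0n1 before partitioning begins. A sketch of the same check (rpc.py path assumed; device name taken from this run):

#!/usr/bin/env bash
rpc=./scripts/rpc.py                       # assumed path to SPDK's RPC client
info=$($rpc bdev_get_bdevs -b Malloc1)
bs=$(jq '.[0].block_size' <<<"$info")      # 512 in this run
nb=$(jq '.[0].num_blocks' <<<"$info")      # 1048576 in this run
bdev_bytes=$((bs * nb))
# Initiator-side size: the sysfs sector count is always in 512-byte units.
dev_bytes=$(( $(cat /sys/block/nvme0n1/size) * 512 ))
[ "$bdev_bytes" -eq "$dev_bytes" ] && echo "sizes match: $bdev_bytes bytes"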
00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:59.406 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:59.406 mke2fs 1.47.0 (5-Feb-2023) 00:11:59.406 Discarding device blocks: 0/522240 done 00:11:59.406 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:59.406 Filesystem UUID: 537bf7f4-42de-4782-9f0e-043bcf2db53d 00:11:59.406 Superblock backups stored on blocks: 00:11:59.406 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:59.406 00:11:59.406 Allocating group tables: 0/64 done 00:11:59.406 Writing inode tables: 0/64 done 00:11:59.667 Creating journal (8192 blocks): done 00:11:59.667 Writing superblocks and filesystem accounting information: 0/64 done 00:11:59.667 00:11:59.667 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:59.667 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:04.958 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:04.958 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:04.958 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:04.958 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:04.958 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:04.958 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:04.958 
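Each filesystem_* subtest, including the ext4 pass traced above, is the same smoke test: make the filesystem on the GPT partition, mount it, create and delete a file with syncs in between, unmount, then confirm via lsblk (just below) that the device and partition survived with the target still alive. The loop distilled into a sketch (device and mountpoint copied from this run):

#!/usr/bin/env bash
set -e
dev=/dev/nvme0n1p1
mnt=/mnt/device
mkdir -p "$mnt"
mkfs.ext4 -F "$dev"        # -F: skip the overwrite prompt, as make_filesystem chooses for ext4
mount "$dev" "$mnt"
touch "$mnt/aaa"           # exercise a write...
sync
rm "$mnt/aaa"              # ...and a delete, then flush again
sync
umount "$mnt"
lsblk -l -o NAME | grep -q -w nvme0n1p1 && echo "partition still present"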
13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3719454 00:12:04.958 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:04.958 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:04.958 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:04.958 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:04.958 00:12:04.958 real 0m5.702s 00:12:04.958 user 0m0.036s 00:12:04.958 sys 0m0.067s 00:12:04.958 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:04.958 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:04.958 ************************************ 00:12:04.958 END TEST filesystem_ext4 00:12:04.958 ************************************ 00:12:05.219 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:05.219 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:05.219 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:05.219 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.219 ************************************ 00:12:05.219 START TEST filesystem_btrfs 00:12:05.219 ************************************ 00:12:05.219 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:05.219 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:05.219 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.219 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:05.219 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:12:05.219 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:05.219 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:12:05.219 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:12:05.219 13:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:12:05.219 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:12:05.219 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:05.480 btrfs-progs v6.8.1 00:12:05.480 See https://btrfs.readthedocs.io for more information. 00:12:05.480 00:12:05.480 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:05.480 NOTE: several default settings have changed in version 5.15, please make sure 00:12:05.480 this does not affect your deployments: 00:12:05.480 - DUP for metadata (-m dup) 00:12:05.480 - enabled no-holes (-O no-holes) 00:12:05.480 - enabled free-space-tree (-R free-space-tree) 00:12:05.480 00:12:05.480 Label: (null) 00:12:05.480 UUID: a2b1ea73-74f6-4b51-98d7-036355ecb194 00:12:05.480 Node size: 16384 00:12:05.480 Sector size: 4096 (CPU page size: 4096) 00:12:05.480 Filesystem size: 510.00MiB 00:12:05.480 Block group profiles: 00:12:05.480 Data: single 8.00MiB 00:12:05.480 Metadata: DUP 32.00MiB 00:12:05.480 System: DUP 8.00MiB 00:12:05.480 SSD detected: yes 00:12:05.480 Zoned device: no 00:12:05.480 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:05.480 Checksum: crc32c 00:12:05.480 Number of devices: 1 00:12:05.480 Devices: 00:12:05.480 ID SIZE PATH 00:12:05.480 1 510.00MiB /dev/nvme0n1p1 00:12:05.480 00:12:05.480 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:12:05.480 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.422 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.422 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3719454 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.683 
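The force-flag branch of make_filesystem shows up in the trace: ext4 takes -F while btrfs here (and xfs below) take lowercase -f, because mke2fs and the other mkfs tools spell their don't-ask flags differently. A simplified sketch of that helper logic (the real function also retries on transient failures, which is omitted here):

#!/usr/bin/env bash
make_filesystem() {
  local fstype=$1 dev=$2 force
  # mke2fs uses -F to skip the "really overwrite?" prompt;
  # mkfs.btrfs and mkfs.xfs use -f for the same purpose.
  if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
  mkfs."$fstype" "$force" "$dev"
}
make_filesystem btrfs /dev/nvme0n1p1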
13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.683 00:12:06.683 real 0m1.484s 00:12:06.683 user 0m0.034s 00:12:06.683 sys 0m0.117s 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.683 ************************************ 00:12:06.683 END TEST filesystem_btrfs 00:12:06.683 ************************************ 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:06.683 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.683 ************************************ 00:12:06.684 START TEST filesystem_xfs 00:12:06.684 ************************************ 00:12:06.684 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:12:06.684 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:06.684 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.684 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:06.684 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:12:06.684 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:06.684 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:12:06.684 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:12:06.684 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:12:06.684 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:12:06.684 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:06.684 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:06.684 = sectsz=512 attr=2, projid32bit=1 00:12:06.684 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:06.684 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:06.684 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:06.684 = sunit=0 swidth=0 blks 00:12:06.684 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:06.684 log =internal log bsize=4096 blocks=16384, version=2 00:12:06.684 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:06.684 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:07.772 Discarding blocks...Done. 00:12:07.772 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:12:07.772 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3719454 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.707 00:12:09.707 real 0m2.815s 00:12:09.707 user 0m0.028s 00:12:09.707 sys 0m0.077s 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:09.707 ************************************ 00:12:09.707 END TEST filesystem_xfs 00:12:09.707 ************************************ 00:12:09.707 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:09.969 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:09.969 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.969 13:17:17 
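Teardown, which begins right above, is the mirror image of setup: drop the test partition (under flock so a concurrent udev probe cannot race the partition-table rewrite), sync, disconnect the controller by NQN, then poll (as waitforserial_disconnect does just below) until the serial disappears before deleting the subsystem and killing the target. A condensed sketch with names copied from this run:

#!/usr/bin/env bash
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# Wait until no block device advertises the test serial any more.
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
  sleep 1
done
echo "controller gone; safe to delete the subsystem and stop nvmf_tgt"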
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.969 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:12:09.969 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:09.969 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.969 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:09.969 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.969 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:12:09.969 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.969 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.969 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.970 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.970 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:09.970 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3719454 00:12:09.970 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3719454 ']' 00:12:09.970 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3719454 00:12:09.970 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:12:09.970 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:09.970 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3719454 00:12:10.231 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:10.231 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:10.231 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3719454' 00:12:10.231 killing process with pid 3719454 00:12:10.231 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3719454 00:12:10.231 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 3719454 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:12.146 00:12:12.146 real 0m19.368s 00:12:12.146 user 1m15.062s 00:12:12.146 sys 0m1.532s 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.146 ************************************ 00:12:12.146 END TEST nvmf_filesystem_no_in_capsule 00:12:12.146 ************************************ 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.146 ************************************ 00:12:12.146 START TEST nvmf_filesystem_in_capsule 00:12:12.146 ************************************ 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3723587 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3723587 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3723587 ']' 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:12.146 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.146 [2024-11-07 13:17:19.803160] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:12:12.146 [2024-11-07 13:17:19.803299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.146 [2024-11-07 13:17:19.968271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.146 [2024-11-07 13:17:20.076491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.146 [2024-11-07 13:17:20.076538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.146 [2024-11-07 13:17:20.076550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.146 [2024-11-07 13:17:20.076561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.146 [2024-11-07 13:17:20.076570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.146 [2024-11-07 13:17:20.078828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.146 [2024-11-07 13:17:20.078915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.146 [2024-11-07 13:17:20.079019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.146 [2024-11-07 13:17:20.079040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 [2024-11-07 13:17:20.616740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.719 13:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.719 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.291 Malloc1 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.291 [2024-11-07 13:17:21.070551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:12:13.291 13:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.291 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:12:13.291 { 00:12:13.291 "name": "Malloc1", 00:12:13.291 "aliases": [ 00:12:13.292 "6a169c88-68cb-427f-95ae-7bfec6fcd1c4" 00:12:13.292 ], 00:12:13.292 "product_name": "Malloc disk", 00:12:13.292 "block_size": 512, 00:12:13.292 "num_blocks": 1048576, 00:12:13.292 "uuid": "6a169c88-68cb-427f-95ae-7bfec6fcd1c4", 00:12:13.292 "assigned_rate_limits": { 00:12:13.292 "rw_ios_per_sec": 0, 00:12:13.292 "rw_mbytes_per_sec": 0, 00:12:13.292 "r_mbytes_per_sec": 0, 00:12:13.292 "w_mbytes_per_sec": 0 00:12:13.292 }, 00:12:13.292 "claimed": true, 00:12:13.292 "claim_type": "exclusive_write", 00:12:13.292 "zoned": false, 00:12:13.292 "supported_io_types": { 00:12:13.292 "read": true, 00:12:13.292 "write": true, 00:12:13.292 "unmap": true, 00:12:13.292 "flush": true, 00:12:13.292 "reset": true, 00:12:13.292 "nvme_admin": false, 00:12:13.292 "nvme_io": false, 00:12:13.292 "nvme_io_md": false, 00:12:13.292 "write_zeroes": true, 00:12:13.292 "zcopy": true, 00:12:13.292 "get_zone_info": false, 00:12:13.292 "zone_management": false, 00:12:13.292 "zone_append": false, 00:12:13.292 "compare": false, 00:12:13.292 "compare_and_write": false, 00:12:13.292 "abort": true, 00:12:13.292 "seek_hole": false, 00:12:13.292 "seek_data": false, 00:12:13.292 "copy": true, 00:12:13.292 "nvme_iov_md": false 00:12:13.292 }, 00:12:13.292 "memory_domains": [ 00:12:13.292 { 00:12:13.292 "dma_device_id": "system", 00:12:13.292 "dma_device_type": 1 00:12:13.292 }, 00:12:13.292 { 00:12:13.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.292 "dma_device_type": 2 00:12:13.292 } 00:12:13.292 ], 00:12:13.292 "driver_specific": {} 00:12:13.292 } 00:12:13.292 ]' 00:12:13.292 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:12:13.292 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:12:13.292 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:12:13.292 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:12:13.292 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:12:13.292 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:12:13.292 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:13.292 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.208 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.208 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:12:15.208 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.208 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:15.208 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:17.123 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:17.123 13:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:18.065 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.008 ************************************ 00:12:19.008 START TEST filesystem_in_capsule_ext4 00:12:19.008 ************************************ 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:12:19.008 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:19.008 mke2fs 1.47.0 (5-Feb-2023) 00:12:19.008 Discarding device blocks: 0/522240 done 00:12:19.008 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:19.008 Filesystem UUID: 9345ad80-8c5d-4e49-8cce-cbc403b2b2be 00:12:19.008 Superblock backups stored on blocks: 00:12:19.008 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:19.008 00:12:19.008 Allocating group tables: 0/64 done 00:12:19.008 Writing inode tables: 
0/64 done 00:12:19.269 Creating journal (8192 blocks): done 00:12:21.488 Writing superblocks and filesystem accounting information: 0/64 done 00:12:21.488 00:12:21.488 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:12:21.488 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3723587 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:28.075 00:12:28.075 real 0m8.551s 00:12:28.075 user 0m0.030s 00:12:28.075 sys 0m0.080s 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:28.075
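The make_filesystem helper stepped through for ext4 above, and for btrfs and xfs below, varies only in how "force" is spelled for each mkfs. A hedged reconstruction from the traced common/autotest_common.sh line numbers (the real helper also keeps the retry counter seen as i=0 in the trace, elided here):

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F    # mkfs.ext4 spells force as -F ...
        else
            force=-f    # ... while mkfs.btrfs and mkfs.xfs take -f
        fi
        mkfs."$fstype" $force "$dev_name"
    }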
************************************ 00:12:28.075 START TEST filesystem_in_capsule_btrfs 00:12:28.075 ************************************ 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:28.075 btrfs-progs v6.8.1 00:12:28.075 See https://btrfs.readthedocs.io for more information. 00:12:28.075 00:12:28.075 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:28.075 NOTE: several default settings have changed in version 5.15, please make sure 00:12:28.075 this does not affect your deployments: 00:12:28.075 - DUP for metadata (-m dup) 00:12:28.075 - enabled no-holes (-O no-holes) 00:12:28.075 - enabled free-space-tree (-R free-space-tree) 00:12:28.075 00:12:28.075 Label: (null) 00:12:28.075 UUID: ef161190-a89d-4164-b492-b8d8ba6407bc 00:12:28.075 Node size: 16384 00:12:28.075 Sector size: 4096 (CPU page size: 4096) 00:12:28.075 Filesystem size: 510.00MiB 00:12:28.075 Block group profiles: 00:12:28.075 Data: single 8.00MiB 00:12:28.075 Metadata: DUP 32.00MiB 00:12:28.075 System: DUP 8.00MiB 00:12:28.075 SSD detected: yes 00:12:28.075 Zoned device: no 00:12:28.075 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:28.075 Checksum: crc32c 00:12:28.075 Number of devices: 1 00:12:28.075 Devices: 00:12:28.075 ID SIZE PATH 00:12:28.075 1 510.00MiB /dev/nvme0n1p1 00:12:28.075 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:12:28.075 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3723587 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:28.653 00:12:28.653 real 0m1.078s 00:12:28.653 user 0m0.031s 00:12:28.653 sys 0m0.119s 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:28.653 ************************************ 00:12:28.653 END TEST filesystem_in_capsule_btrfs 00:12:28.653 ************************************ 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.653 ************************************ 00:12:28.653 START TEST filesystem_in_capsule_xfs 00:12:28.653 ************************************ 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:12:28.653 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:28.914 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:28.914 = sectsz=512 attr=2, projid32bit=1 00:12:28.914 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:28.914 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:28.914 data = bsize=4096 blocks=130560, imaxpct=25 00:12:28.914 = sunit=0 swidth=0 blks 00:12:28.914 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:28.914 log =internal log bsize=4096 blocks=16384, version=2 00:12:28.914 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:28.914 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:29.856 Discarding blocks...Done. 
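After each mkfs, the same mount-and-check smoke test runs; assembled from the target/filesystem.sh line numbers that recur in this trace, it amounts to the following (a sketch, not the verbatim script):

    mount /dev/nvme0n1p1 /mnt/device          # filesystem.sh@23
    touch /mnt/device/aaa                     # @24: write a file through NVMe-oF
    sync                                      # @25
    rm /mnt/device/aaa                        # @26
    sync                                      # @27
    umount /mnt/device                        # @30
    kill -0 "$nvmfpid"                        # @37: target must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1     # @40: controller still enumerated
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # @43: partition still enumerated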
00:12:29.856 13:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:12:29.856 13:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3723587 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:31.768 00:12:31.768 real 0m2.853s 00:12:31.768 user 0m0.022s 00:12:31.768 sys 0m0.083s 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:31.768 ************************************ 00:12:31.768 END TEST filesystem_in_capsule_xfs 00:12:31.768 ************************************ 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:31.768 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:32.029 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3723587 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3723587 ']' 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3723587 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:32.289 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3723587 00:12:32.550 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:32.550 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:32.550 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3723587' 00:12:32.550 killing process with pid 3723587 00:12:32.550 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3723587 00:12:32.550 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3723587 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:34.462 00:12:34.462 real 0m22.311s 00:12:34.462 user 1m26.729s 00:12:34.462 sys 0m1.629s 00:12:34.462 13:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.462 ************************************ 00:12:34.462 END TEST nvmf_filesystem_in_capsule 00:12:34.462 ************************************ 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:34.462 rmmod nvme_tcp 00:12:34.462 rmmod nvme_fabrics 00:12:34.462 rmmod nvme_keyring 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.462 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.375 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:36.375 00:12:36.375 real 0m53.043s 00:12:36.375 user 2m44.463s 00:12:36.375 sys 0m9.809s 00:12:36.375 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:36.375 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:36.375 
************************************ 00:12:36.375 END TEST nvmf_filesystem 00:12:36.375 ************************************ 00:12:36.375 13:17:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:36.375 13:17:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:36.375 13:17:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:36.375 13:17:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.375 ************************************ 00:12:36.375 START TEST nvmf_target_discovery 00:12:36.375 ************************************ 00:12:36.375 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:36.375 * Looking for test storage... 00:12:36.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.375 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:36.375 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:12:36.375 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:36.636 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:36.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.637 --rc genhtml_branch_coverage=1 00:12:36.637 --rc genhtml_function_coverage=1 00:12:36.637 --rc genhtml_legend=1 00:12:36.637 --rc geninfo_all_blocks=1 00:12:36.637 --rc geninfo_unexecuted_blocks=1 00:12:36.637 00:12:36.637 ' 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:36.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.637 --rc genhtml_branch_coverage=1 00:12:36.637 --rc genhtml_function_coverage=1 00:12:36.637 --rc genhtml_legend=1 00:12:36.637 --rc geninfo_all_blocks=1 00:12:36.637 --rc geninfo_unexecuted_blocks=1 00:12:36.637 00:12:36.637 ' 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:36.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.637 --rc genhtml_branch_coverage=1 00:12:36.637 --rc genhtml_function_coverage=1 00:12:36.637 --rc genhtml_legend=1 00:12:36.637 --rc geninfo_all_blocks=1 00:12:36.637 --rc geninfo_unexecuted_blocks=1 00:12:36.637 00:12:36.637 ' 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:36.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.637 --rc genhtml_branch_coverage=1 00:12:36.637 --rc genhtml_function_coverage=1 00:12:36.637 --rc genhtml_legend=1 00:12:36.637 --rc geninfo_all_blocks=1 00:12:36.637 --rc geninfo_unexecuted_blocks=1 00:12:36.637 00:12:36.637 ' 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.637 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.638 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.782 13:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:44.782 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:44.782 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.782 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:44.783 Found net devices under 0000:31:00.0: cvl_0_0 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:44.783 Found net devices under 0000:31:00.1: cvl_0_1 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.783 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.044 13:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:12:45.044 00:12:45.044 --- 10.0.0.2 ping statistics --- 00:12:45.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.044 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:12:45.044 00:12:45.044 --- 10.0.0.1 ping statistics --- 00:12:45.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.044 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3732795 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3732795 00:12:45.044 13:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.044 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3732795 ']' 00:12:45.045 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.045 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:45.045 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.045 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:45.045 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.305 [2024-11-07 13:17:53.072977] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:12:45.305 [2024-11-07 13:17:53.073109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.305 [2024-11-07 13:17:53.234498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.571 [2024-11-07 13:17:53.335187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.571 [2024-11-07 13:17:53.335233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.571 [2024-11-07 13:17:53.335245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.571 [2024-11-07 13:17:53.335256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.571 [2024-11-07 13:17:53.335265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
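Note on the recurring "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message above (it repeats each time a test sources common.sh, including the referrals run below): this is bash's numeric test tripping over an empty string, as the xtrace shows the test expanding to '[' '' -eq 1 ']'. A minimal sketch of the failure and one defensive spelling, assuming the value is meant to be a 0/1 flag (the name "flag" is illustrative, not the variable common.sh actually uses):

flag=""
[ "$flag" -eq 1 ] && echo enabled      # -> [: : integer expression expected (-eq needs an integer)
[ "${flag:-0}" -eq 1 ] && echo enabled # defaulting to 0 makes the test quietly false instead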
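For orientation, the nvmf_tcp_init sequence traced above reduces to the following namespace plumbing; a condensed sketch with the device names and addresses exactly as in this run (cvl_0_0/cvl_0_1 are the two e810 ports enumerated earlier):

ip netns add cvl_0_0_ns_spdk                       # target gets its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

The target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as traced above), so it listens on 10.0.0.2 while the initiator-side tools connect from the root namespace over real hardware on a single host.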
00:12:45.571 [2024-11-07 13:17:53.337522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.571 [2024-11-07 13:17:53.337606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.571 [2024-11-07 13:17:53.337722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.571 [2024-11-07 13:17:53.337746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.877 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:45.877 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:12:45.877 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:45.877 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:45.877 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 [2024-11-07 13:17:53.894082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 Null1 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 13:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 [2024-11-07 13:17:53.972550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 Null2 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:46.150 Null3 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 Null4 00:12:46.150 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.151 13:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.151 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:46.412 00:12:46.412 Discovery Log Number of Records 6, Generation counter 6 00:12:46.412 =====Discovery Log Entry 0====== 00:12:46.412 trtype: tcp 00:12:46.412 adrfam: ipv4 00:12:46.412 subtype: current discovery subsystem 00:12:46.412 treq: not required 00:12:46.412 portid: 0 00:12:46.412 trsvcid: 4420 00:12:46.412 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:46.412 traddr: 10.0.0.2 00:12:46.412 eflags: explicit discovery connections, duplicate discovery information 00:12:46.412 sectype: none 00:12:46.412 =====Discovery Log Entry 1====== 00:12:46.412 trtype: tcp 00:12:46.412 adrfam: ipv4 00:12:46.412 subtype: nvme subsystem 00:12:46.412 treq: not required 00:12:46.412 portid: 0 00:12:46.412 trsvcid: 4420 00:12:46.412 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:46.412 traddr: 10.0.0.2 00:12:46.412 eflags: none 00:12:46.412 sectype: none 00:12:46.412 =====Discovery Log Entry 2====== 00:12:46.412 trtype: tcp 00:12:46.412 adrfam: ipv4 00:12:46.412 subtype: nvme subsystem 00:12:46.412 treq: not required 00:12:46.412 portid: 0 00:12:46.412 trsvcid: 4420 00:12:46.412 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:46.412 traddr: 10.0.0.2 00:12:46.412 eflags: none 00:12:46.412 sectype: none 00:12:46.412 =====Discovery Log Entry 3====== 00:12:46.412 trtype: tcp 00:12:46.412 adrfam: ipv4 00:12:46.412 subtype: nvme subsystem 00:12:46.412 treq: not required 00:12:46.412 portid: 0 00:12:46.412 trsvcid: 4420 00:12:46.412 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:46.412 traddr: 10.0.0.2 00:12:46.412 eflags: none 00:12:46.412 sectype: none 00:12:46.412 =====Discovery Log Entry 4====== 00:12:46.412 trtype: tcp 00:12:46.412 adrfam: ipv4 00:12:46.412 subtype: nvme subsystem 
00:12:46.412 treq: not required 00:12:46.412 portid: 0 00:12:46.412 trsvcid: 4420 00:12:46.412 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:46.412 traddr: 10.0.0.2 00:12:46.412 eflags: none 00:12:46.412 sectype: none 00:12:46.412 =====Discovery Log Entry 5====== 00:12:46.412 trtype: tcp 00:12:46.412 adrfam: ipv4 00:12:46.412 subtype: discovery subsystem referral 00:12:46.412 treq: not required 00:12:46.412 portid: 0 00:12:46.412 trsvcid: 4430 00:12:46.412 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:46.412 traddr: 10.0.0.2 00:12:46.412 eflags: none 00:12:46.412 sectype: none 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:46.413 Perform nvmf subsystem discovery via RPC 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 [ 00:12:46.413 { 00:12:46.413 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:46.413 "subtype": "Discovery", 00:12:46.413 "listen_addresses": [ 00:12:46.413 { 00:12:46.413 "trtype": "TCP", 00:12:46.413 "adrfam": "IPv4", 00:12:46.413 "traddr": "10.0.0.2", 00:12:46.413 "trsvcid": "4420" 00:12:46.413 } 00:12:46.413 ], 00:12:46.413 "allow_any_host": true, 00:12:46.413 "hosts": [] 00:12:46.413 }, 00:12:46.413 { 00:12:46.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:46.413 "subtype": "NVMe", 00:12:46.413 "listen_addresses": [ 00:12:46.413 { 00:12:46.413 "trtype": "TCP", 00:12:46.413 "adrfam": "IPv4", 00:12:46.413 "traddr": "10.0.0.2", 00:12:46.413 "trsvcid": "4420" 00:12:46.413 } 00:12:46.413 ], 00:12:46.413 "allow_any_host": true, 00:12:46.413 "hosts": [], 00:12:46.413 "serial_number": "SPDK00000000000001", 00:12:46.413 "model_number": "SPDK bdev Controller", 00:12:46.413 "max_namespaces": 32, 00:12:46.413 "min_cntlid": 1, 00:12:46.413 "max_cntlid": 65519, 00:12:46.413 "namespaces": [ 00:12:46.413 { 00:12:46.413 "nsid": 1, 00:12:46.413 "bdev_name": "Null1", 00:12:46.413 "name": "Null1", 00:12:46.413 "nguid": "84A11CA1C74F4F2F8534AAD23D707DF3", 00:12:46.413 "uuid": "84a11ca1-c74f-4f2f-8534-aad23d707df3" 00:12:46.413 } 00:12:46.413 ] 00:12:46.413 }, 00:12:46.413 { 00:12:46.413 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:46.413 "subtype": "NVMe", 00:12:46.413 "listen_addresses": [ 00:12:46.413 { 00:12:46.413 "trtype": "TCP", 00:12:46.413 "adrfam": "IPv4", 00:12:46.413 "traddr": "10.0.0.2", 00:12:46.413 "trsvcid": "4420" 00:12:46.413 } 00:12:46.413 ], 00:12:46.413 "allow_any_host": true, 00:12:46.413 "hosts": [], 00:12:46.413 "serial_number": "SPDK00000000000002", 00:12:46.413 "model_number": "SPDK bdev Controller", 00:12:46.413 "max_namespaces": 32, 00:12:46.413 "min_cntlid": 1, 00:12:46.413 "max_cntlid": 65519, 00:12:46.413 "namespaces": [ 00:12:46.413 { 00:12:46.413 "nsid": 1, 00:12:46.413 "bdev_name": "Null2", 00:12:46.413 "name": "Null2", 00:12:46.413 "nguid": "CA83495C7846401A8B41CEB238CD176B", 00:12:46.413 "uuid": "ca83495c-7846-401a-8b41-ceb238cd176b" 00:12:46.413 } 00:12:46.413 ] 00:12:46.413 }, 00:12:46.413 { 00:12:46.413 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:46.413 "subtype": "NVMe", 00:12:46.413 "listen_addresses": [ 00:12:46.413 { 00:12:46.413 "trtype": "TCP", 00:12:46.413 "adrfam": "IPv4", 00:12:46.413 "traddr": "10.0.0.2", 
00:12:46.413 "trsvcid": "4420" 00:12:46.413 } 00:12:46.413 ], 00:12:46.413 "allow_any_host": true, 00:12:46.413 "hosts": [], 00:12:46.413 "serial_number": "SPDK00000000000003", 00:12:46.413 "model_number": "SPDK bdev Controller", 00:12:46.413 "max_namespaces": 32, 00:12:46.413 "min_cntlid": 1, 00:12:46.413 "max_cntlid": 65519, 00:12:46.413 "namespaces": [ 00:12:46.413 { 00:12:46.413 "nsid": 1, 00:12:46.413 "bdev_name": "Null3", 00:12:46.413 "name": "Null3", 00:12:46.413 "nguid": "2B893DFC3F9C4D77B0F80502E4159C84", 00:12:46.413 "uuid": "2b893dfc-3f9c-4d77-b0f8-0502e4159c84" 00:12:46.413 } 00:12:46.413 ] 00:12:46.413 }, 00:12:46.413 { 00:12:46.413 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:46.413 "subtype": "NVMe", 00:12:46.413 "listen_addresses": [ 00:12:46.413 { 00:12:46.413 "trtype": "TCP", 00:12:46.413 "adrfam": "IPv4", 00:12:46.413 "traddr": "10.0.0.2", 00:12:46.413 "trsvcid": "4420" 00:12:46.413 } 00:12:46.413 ], 00:12:46.413 "allow_any_host": true, 00:12:46.413 "hosts": [], 00:12:46.413 "serial_number": "SPDK00000000000004", 00:12:46.413 "model_number": "SPDK bdev Controller", 00:12:46.413 "max_namespaces": 32, 00:12:46.413 "min_cntlid": 1, 00:12:46.413 "max_cntlid": 65519, 00:12:46.413 "namespaces": [ 00:12:46.413 { 00:12:46.413 "nsid": 1, 00:12:46.413 "bdev_name": "Null4", 00:12:46.413 "name": "Null4", 00:12:46.413 "nguid": "FEA7AA46B16B48A58BF2E98A590EF0C3", 00:12:46.413 "uuid": "fea7aa46-b16b-48a5-8bf2-e98a590ef0c3" 00:12:46.413 } 00:12:46.413 ] 00:12:46.413 } 00:12:46.413 ] 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.413 13:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.413 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:46.675 13:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:46.675 rmmod nvme_tcp 00:12:46.675 rmmod nvme_fabrics 00:12:46.675 rmmod nvme_keyring 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3732795 ']' 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3732795 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3732795 ']' 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3732795 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3732795 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3732795' 00:12:46.675 killing process with pid 3732795 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3732795 00:12:46.675 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3732795 00:12:47.617 13:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:47.617 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:47.617 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:47.617 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:47.617 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:47.617 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:47.617 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:47.617 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:47.617 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:47.617 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.617 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.617 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.529 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:49.529 00:12:49.529 real 0m13.227s 00:12:49.529 user 0m10.093s 00:12:49.529 sys 0m6.917s 00:12:49.529 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:49.529 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.529 ************************************ 00:12:49.529 END TEST nvmf_target_discovery 00:12:49.529 ************************************ 00:12:49.529 13:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:49.529 13:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:49.529 13:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.529 13:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.791 ************************************ 00:12:49.791 START TEST nvmf_referrals 00:12:49.791 ************************************ 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:49.791 * Looking for test storage... 
00:12:49.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.791 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:49.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.791 --rc genhtml_branch_coverage=1 00:12:49.791 --rc genhtml_function_coverage=1 00:12:49.791 --rc genhtml_legend=1 00:12:49.791 --rc geninfo_all_blocks=1 00:12:49.792 --rc geninfo_unexecuted_blocks=1 00:12:49.792 00:12:49.792 ' 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:49.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.792 --rc genhtml_branch_coverage=1 00:12:49.792 --rc genhtml_function_coverage=1 00:12:49.792 --rc genhtml_legend=1 00:12:49.792 --rc geninfo_all_blocks=1 00:12:49.792 --rc geninfo_unexecuted_blocks=1 00:12:49.792 00:12:49.792 ' 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:49.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.792 --rc genhtml_branch_coverage=1 00:12:49.792 --rc genhtml_function_coverage=1 00:12:49.792 --rc genhtml_legend=1 00:12:49.792 --rc geninfo_all_blocks=1 00:12:49.792 --rc geninfo_unexecuted_blocks=1 00:12:49.792 00:12:49.792 ' 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:49.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.792 --rc genhtml_branch_coverage=1 00:12:49.792 --rc genhtml_function_coverage=1 00:12:49.792 --rc genhtml_legend=1 00:12:49.792 --rc geninfo_all_blocks=1 00:12:49.792 --rc geninfo_unexecuted_blocks=1 00:12:49.792 00:12:49.792 ' 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:49.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:49.792 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:57.939 13:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:57.939 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:57.939 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:57.939 
13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.939 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:57.940 Found net devices under 0000:31:00.0: cvl_0_0 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:57.940 Found net devices under 0000:31:00.1: cvl_0_1 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:57.940 13:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.940 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:58.201 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:58.201 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:58.201 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:58.201 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:58.201 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:58.201 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:58.201 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:58.201 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:58.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
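The nvmf_tcp_init sequence traced above splits the two E810 ports between the root namespace (initiator, cvl_0_1, 10.0.0.1) and a private namespace (target, cvl_0_0, 10.0.0.2), then opens the NVMe/TCP port in iptables before the cross-namespace ping check. A standalone recap of those commands (interface and namespace names taken from the log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # cross-namespace sanity check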
00:12:58.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:12:58.462 00:12:58.462 --- 10.0.0.2 ping statistics --- 00:12:58.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.462 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:58.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:12:58.462 00:12:58.462 --- 10.0.0.1 ping statistics --- 00:12:58.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.462 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3737932 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3737932 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3737932 ']' 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
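nvmfappstart then launches nvmf_tgt inside the target namespace and blocks until its RPC socket answers. A minimal stand-in for that launch-and-wait step (binary path and flags from the log; the polling loop approximates waitforlisten rather than reproducing it, and rpc_get_methods is used here only as a cheap liveness probe):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is up (waitforlisten analogue).
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done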
00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:58.462 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:58.462 [2024-11-07 13:18:06.366010] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:12:58.462 [2024-11-07 13:18:06.366145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.724 [2024-11-07 13:18:06.528411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.724 [2024-11-07 13:18:06.629931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.724 [2024-11-07 13:18:06.629974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.724 [2024-11-07 13:18:06.629986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.724 [2024-11-07 13:18:06.630000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.724 [2024-11-07 13:18:06.630009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.724 [2024-11-07 13:18:06.632265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.724 [2024-11-07 13:18:06.632348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.724 [2024-11-07 13:18:06.632464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.724 [2024-11-07 13:18:06.632487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.295 [2024-11-07 13:18:07.185152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
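With the target up, the test creates the TCP transport and a discovery listener, then (as the next lines show) registers three referral entries on port 4430. The same sequence written as plain rpc.py calls — a sketch; the trace itself goes through the rpc_cmd wrapper over /var/tmp/spdk.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done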
00:12:59.295 [2024-11-07 13:18:07.208362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:59.295 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.556 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.816 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.816 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:59.816 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.816 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.816 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.816 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:59.816 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:59.816 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.817 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.817 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.817 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:59.817 13:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:59.817 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:59.817 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:59.817 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:59.817 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:59.817 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:00.077 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:00.077 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:00.077 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:00.077 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.077 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.077 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.077 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:00.077 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:00.078 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:00.338 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:00.338 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:00.338 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:00.338 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:00.338 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:00.338 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:00.338 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:00.598 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:00.598 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:00.598 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:00.598 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:00.598 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:00.598 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:00.598 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:00.598 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:00.598 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.598 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.598 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.859 13:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:00.859 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:00.860 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:01.120 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:01.120 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:01.120 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:01.120 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:13:01.120 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:01.120 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:01.381 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
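The pass/fail core of this test is the comparison repeated above: the referral list reported over RPC must match what an initiator sees via nvme discover, with the jq filter excluding the "current discovery subsystem" record. Condensed into a single check (jq filters copied from the trace; the --hostnqn/--hostid arguments are omitted here for brevity):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    rpc_ips=$($rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    nvme_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
    [[ "$rpc_ips" == "$nvme_ips" ]] && echo "referrals match"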
00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.642 rmmod nvme_tcp 00:13:01.642 rmmod nvme_fabrics 00:13:01.642 rmmod nvme_keyring 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3737932 ']' 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3737932 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3737932 ']' 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3737932 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3737932 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3737932' 00:13:01.642 killing process with pid 3737932 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 3737932 00:13:01.642 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3737932 00:13:02.585 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:02.585 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:02.585 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:02.585 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:02.585 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:02.585 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:02.585 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:02.585 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:02.585 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:02.585 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.585 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.585 13:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.497 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:04.497 00:13:04.497 real 0m14.881s 00:13:04.497 user 0m17.435s 00:13:04.497 sys 0m7.300s 00:13:04.497 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:04.497 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.497 ************************************ 00:13:04.497 END TEST nvmf_referrals 00:13:04.497 ************************************ 00:13:04.497 13:18:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:04.497 13:18:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:04.497 13:18:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:04.497 13:18:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:04.497 ************************************ 00:13:04.497 START TEST nvmf_connect_disconnect 00:13:04.497 ************************************ 00:13:04.497 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:04.758 * Looking for test storage... 00:13:04.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.758 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:04.758 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:13:04.758 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:04.758 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:04.758 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.758 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.758 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.758 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.758 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.758 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.758 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.759 13:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:04.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.759 --rc genhtml_branch_coverage=1 00:13:04.759 --rc genhtml_function_coverage=1 00:13:04.759 --rc genhtml_legend=1 00:13:04.759 --rc geninfo_all_blocks=1 00:13:04.759 --rc geninfo_unexecuted_blocks=1 00:13:04.759 00:13:04.759 ' 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:04.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.759 --rc genhtml_branch_coverage=1 00:13:04.759 --rc genhtml_function_coverage=1 00:13:04.759 --rc genhtml_legend=1 00:13:04.759 --rc geninfo_all_blocks=1 00:13:04.759 --rc geninfo_unexecuted_blocks=1 00:13:04.759 00:13:04.759 ' 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:04.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.759 --rc genhtml_branch_coverage=1 00:13:04.759 --rc genhtml_function_coverage=1 00:13:04.759 --rc genhtml_legend=1 00:13:04.759 --rc geninfo_all_blocks=1 00:13:04.759 --rc geninfo_unexecuted_blocks=1 00:13:04.759 00:13:04.759 ' 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:04.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.759 --rc genhtml_branch_coverage=1 00:13:04.759 --rc genhtml_function_coverage=1 00:13:04.759 --rc genhtml_legend=1 00:13:04.759 --rc geninfo_all_blocks=1 00:13:04.759 --rc geninfo_unexecuted_blocks=1 00:13:04.759 00:13:04.759 ' 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.759 13:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:04.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:04.759 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:04.760 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.760 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.760 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.760 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:04.760 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:04.760 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:04.760 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:12.899 
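The "[: : integer expression expected" complaint above is a real (if non-fatal) bash quirk in the sourced nvmf/common.sh: line 33 expands to '[' '' -eq 1 ']', a numeric comparison against an empty string. A minimal standalone reproduction and a defensive rewrite, assuming the empty value is simply an unset option variable:

#!/usr/bin/env bash
opt=''
[ "$opt" -eq 1 ] && echo enabled   # reproduces "[: : integer expression expected"

# Defensive variant: default the value before comparing numerically.
if [ "${opt:-0}" -eq 1 ]; then
    echo enabled
else
    echo disabled
fi

Here the failing test is only used as an if-condition, so as the log shows the script simply falls through to the next branch and keeps running.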
13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:12.899 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.899 
13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:12.899 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:12.899 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:12.900 Found net devices under 0000:31:00.0: cvl_0_0 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
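Each "Found ..." line above comes from walking the PCI bus and resolving a NIC's kernel interface through sysfs. A minimal sketch of the same idea (a simplification, not the gather_supported_nvmf_pci_devs implementation from nvmf/common.sh; the vendor/device IDs are the ones in the log):

#!/usr/bin/env bash
intel=0x8086 e810=0x159b
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
    echo "Found ${pci##*/} ($intel - $e810)"
    for net in "$pci"/net/*; do          # bound netdevs appear as dirs under net/
        [[ -e $net ]] && echo "  net device: ${net##*/}"
    done
done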
00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:12.900 Found net devices under 0000:31:00.1: cvl_0_1 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:12.900 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:13.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:13:13.161 00:13:13.161 --- 10.0.0.2 ping statistics --- 00:13:13.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.161 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:13.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:13:13.161 00:13:13.161 --- 10.0.0.1 ping statistics --- 00:13:13.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.161 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:13.161 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:13.161 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:13.161 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:13.161 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:13.161 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:13.161 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3744105 00:13:13.161 13:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3744105 00:13:13.161 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:13.161 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3744105 ']' 00:13:13.161 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.161 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:13.161 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.161 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:13.161 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:13.161 [2024-11-07 13:18:21.139686] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:13:13.161 [2024-11-07 13:18:21.139823] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.421 [2024-11-07 13:18:21.300845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:13.421 [2024-11-07 13:18:21.401564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.421 [2024-11-07 13:18:21.401611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.421 [2024-11-07 13:18:21.401622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.421 [2024-11-07 13:18:21.401633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.421 [2024-11-07 13:18:21.401642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
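Before the target starts, one E810 port is moved into a fresh network namespace, and the two pings above verify reachability in both directions. A condensed sketch of that setup, using the interface names, addresses, and tagged iptables rule exactly as they appear in the log (must run as root; the address flushes and error handling are omitted):

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"               # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays on the host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Tag the ACCEPT rule so teardown can strip it with a grep (see below).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                            # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1        # namespace -> host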
00:13:13.421 [2024-11-07 13:18:21.403913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.421 [2024-11-07 13:18:21.403980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.421 [2024-11-07 13:18:21.404108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.421 [2024-11-07 13:18:21.404134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:13.991 [2024-11-07 13:18:21.964606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.991 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:14.251 13:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:14.251 [2024-11-07 13:18:22.072377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:14.251 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:16.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.314 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:16:23.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:11.789 rmmod nvme_tcp 00:17:11.789 rmmod nvme_fabrics 00:17:11.789 rmmod nvme_keyring 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3744105 ']' 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3744105 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3744105 ']' 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3744105 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
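The 100-iteration run that just completed boils down to: start nvmf_tgt inside the namespace, provision a malloc-backed subsystem over RPC, then connect and disconnect the initiator in a loop (num_iterations=100, NVME_CONNECT='nvme connect -i 8'). A minimal sketch with the same parameters; the sleep stands in for the harness's waitforlisten poll on /var/tmp/spdk.sock, and SPDK is the checkout path shown in the log:

#!/usr/bin/env bash
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1

# The target runs inside the namespace, but its RPC endpoint is a UNIX
# socket on the shared filesystem, so rpc.py can reach it from the host.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
sleep 3

rpc="$SPDK/scripts/rpc.py"
"$rpc" nvmf_create_transport -t tcp -o -u 8192 -c 0
"$rpc" bdev_malloc_create 64 512                               # -> Malloc0
"$rpc" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_ns "$NQN" Malloc0
"$rpc" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

for i in $(seq 100); do
    nvme connect -i 8 -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    nvme disconnect -n "$NQN"   # emits "NQN:... disconnected 1 controller(s)"
done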
00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3744105 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3744105' 00:17:11.789 killing process with pid 3744105 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3744105 00:17:11.789 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3744105 00:17:12.361 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:12.361 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:12.361 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:12.361 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:12.361 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:12.361 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:12.361 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:12.361 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:12.361 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:12.361 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.361 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.361 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:14.904 00:17:14.904 real 4m9.921s 00:17:14.904 user 15m43.267s 00:17:14.904 sys 0m30.480s 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:14.904 ************************************ 00:17:14.904 END TEST nvmf_connect_disconnect 00:17:14.904 ************************************ 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:14.904 13:22:22 
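The firewall teardown above is a one-liner worth noting: because every rule the test installed carries an SPDK_NVMF comment, cleanup is a filtered save/restore rather than bookkeeping of individual rules:

#!/usr/bin/env bash
# Drop every rule tagged SPDK_NVMF; everything else survives untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore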
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:14.904 ************************************ 00:17:14.904 START TEST nvmf_multitarget 00:17:14.904 ************************************ 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:14.904 * Looking for test storage... 00:17:14.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.904 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:14.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.905 --rc genhtml_branch_coverage=1 00:17:14.905 --rc genhtml_function_coverage=1 00:17:14.905 --rc genhtml_legend=1 00:17:14.905 --rc geninfo_all_blocks=1 00:17:14.905 --rc geninfo_unexecuted_blocks=1 00:17:14.905 00:17:14.905 ' 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:14.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.905 --rc genhtml_branch_coverage=1 00:17:14.905 --rc genhtml_function_coverage=1 00:17:14.905 --rc genhtml_legend=1 00:17:14.905 --rc geninfo_all_blocks=1 00:17:14.905 --rc geninfo_unexecuted_blocks=1 00:17:14.905 00:17:14.905 ' 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:14.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.905 --rc genhtml_branch_coverage=1 00:17:14.905 --rc genhtml_function_coverage=1 00:17:14.905 --rc genhtml_legend=1 00:17:14.905 --rc geninfo_all_blocks=1 00:17:14.905 --rc geninfo_unexecuted_blocks=1 00:17:14.905 00:17:14.905 ' 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:14.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.905 --rc genhtml_branch_coverage=1 00:17:14.905 --rc genhtml_function_coverage=1 00:17:14.905 --rc genhtml_legend=1 00:17:14.905 --rc geninfo_all_blocks=1 00:17:14.905 --rc geninfo_unexecuted_blocks=1 00:17:14.905 00:17:14.905 ' 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.905 13:22:22 
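The lcov check above ("lt 1.15 2") is an ordinary field-wise version comparison: both strings are split on '.', '-' and ':' and compared numerically, with missing fields treated as 0. A standalone sketch of the same approach (simplified; the real scripts/common.sh also copes with non-numeric fields):

#!/usr/bin/env bash
# version_lt A B -> exit 0 if version A sorts before version B
version_lt() {
    local -a v1 v2
    local i f1 f2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        f1=${v1[i]:-0} f2=${v2[i]:-0}
        (( f1 > f2 )) && return 1
        (( f1 < f2 )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the log's result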
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:14.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:14.905 13:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:14.905 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
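The e810/x722/mlx arrays here are filled from a pci_bus_cache map keyed "vendor:device"; the cache itself is built before this excerpt. A plausible way to populate such a map from sysfs, offered only as an assumption about its shape (not SPDK's actual cache-building code):

#!/usr/bin/env bash
declare -A pci_bus_cache   # "0x8086:0x159b" -> space-separated list of BDFs

for pci in /sys/bus/pci/devices/*; do
    key="$(<"$pci/vendor"):$(<"$pci/device")"
    pci_bus_cache[$key]+="${pci##*/} "
done

e810=(${pci_bus_cache["0x8086:0x159b"]})   # the same lookup the test performs
echo "E810 functions: ${e810[*]:-none}"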
00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:23.040 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:23.040 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:23.041 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:23.041 Found net devices under 0000:31:00.0: cvl_0_0 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:23.041 Found net devices under 0000:31:00.1: cvl_0_1 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:23.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:23.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:17:23.041 00:17:23.041 --- 10.0.0.2 ping statistics --- 00:17:23.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.041 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:23.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:23.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:17:23.041 00:17:23.041 --- 10.0.0.1 ping statistics --- 00:17:23.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.041 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:23.041 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:23.042 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:23.042 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:23.042 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3796063 00:17:23.042 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3796063 00:17:23.042 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:23.042 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3796063 ']' 00:17:23.042 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.042 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:23.042 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.042 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:23.042 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:23.042 [2024-11-07 13:22:30.614844] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
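
The nvmf_tcp_init block traced above carves the two E810 ports into a point-to-point test rig: cvl_0_0 moves into a private network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). A minimal sketch of that plumbing, assuming the same interface and namespace names this run discovered:

#!/usr/bin/env bash
# Rebuild the namespace topology from the nvmf_tcp_init trace (run as root).
NS=cvl_0_0_ns_spdk        # target namespace name, from this run
TGT_IF=cvl_0_0            # NIC port handed to the target
INI_IF=cvl_0_1            # NIC port left in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                            # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Open TCP/4420 and tag the rule so teardown can find it again:
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
ping -c 1 10.0.0.2                       # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator

The comment tag is the notable design choice: it lets the harness later delete exactly its own firewall rules with a grep, without touching anything else on the host.
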
00:17:23.042 [2024-11-07 13:22:30.614990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.042 [2024-11-07 13:22:30.779249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:23.042 [2024-11-07 13:22:30.882730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.042 [2024-11-07 13:22:30.882774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.042 [2024-11-07 13:22:30.882786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.042 [2024-11-07 13:22:30.882797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.042 [2024-11-07 13:22:30.882806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:23.042 [2024-11-07 13:22:30.885215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.042 [2024-11-07 13:22:30.885296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.042 [2024-11-07 13:22:30.885416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.042 [2024-11-07 13:22:30.885435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:23.613 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:23.613 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:17:23.613 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:23.613 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:23.613 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:23.613 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.613 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:23.613 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:23.613 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:23.613 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:23.613 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:23.873 "nvmf_tgt_1" 00:17:23.873 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:23.873 "nvmf_tgt_2" 00:17:23.873 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
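
Once the target is up, the multitarget test itself is a short RPC conversation: count targets, create two more, recount, delete them, recount. Condensed, with the helper script path taken from the log (the count function is shorthand added here, not part of the suite):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
count() { "$RPC" nvmf_get_targets | jq length; }

[ "$(count)" -eq 1 ]                           # only the default target exists at start
"$RPC" nvmf_create_target -n nvmf_tgt_1 -s 32  # prints "nvmf_tgt_1"
"$RPC" nvmf_create_target -n nvmf_tgt_2 -s 32  # prints "nvmf_tgt_2"
[ "$(count)" -eq 3 ]                           # default target plus the two named ones
"$RPC" nvmf_delete_target -n nvmf_tgt_1        # prints "true"
"$RPC" nvmf_delete_target -n nvmf_tgt_2
[ "$(count)" -eq 1 ]                           # back to just the default
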
00:17:23.873 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:23.873 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:23.873 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:24.134 true 00:17:24.134 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:24.134 true 00:17:24.134 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:24.134 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:24.394 rmmod nvme_tcp 00:17:24.394 rmmod nvme_fabrics 00:17:24.394 rmmod nvme_keyring 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3796063 ']' 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3796063 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3796063 ']' 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3796063 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3796063 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:24.394 13:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3796063' 00:17:24.394 killing process with pid 3796063 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3796063 00:17:24.394 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3796063 00:17:25.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:25.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:25.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:25.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:25.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:25.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:25.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:25.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:25.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.246 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:27.246 00:17:27.246 real 0m12.709s 00:17:27.246 user 0m11.172s 00:17:27.246 sys 0m6.586s 00:17:27.246 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:27.246 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:27.246 ************************************ 00:17:27.246 END TEST nvmf_multitarget 00:17:27.246 ************************************ 00:17:27.246 13:22:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:27.246 13:22:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:27.246 13:22:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:27.246 13:22:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.246 ************************************ 00:17:27.246 START TEST nvmf_rpc 00:17:27.246 ************************************ 00:17:27.246 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:27.507 * Looking for test storage... 
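
Before nvmf_rpc rebuilds the same environment, note what the nvmftestfini teardown traced above actually undoes. Roughly, and in the traced order (the namespace removal step is an assumption, since _remove_spdk_ns runs with its trace redirected away):

modprobe -v -r nvme-tcp                               # also pulls out nvme_fabrics and nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess; pid 3796063 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the rules tagged at setup
ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1
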
00:17:27.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.507 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:27.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.508 --rc genhtml_branch_coverage=1 00:17:27.508 --rc genhtml_function_coverage=1 00:17:27.508 --rc genhtml_legend=1 00:17:27.508 --rc geninfo_all_blocks=1 00:17:27.508 --rc geninfo_unexecuted_blocks=1 00:17:27.508 00:17:27.508 ' 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:27.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.508 --rc genhtml_branch_coverage=1 00:17:27.508 --rc genhtml_function_coverage=1 00:17:27.508 --rc genhtml_legend=1 00:17:27.508 --rc geninfo_all_blocks=1 00:17:27.508 --rc geninfo_unexecuted_blocks=1 00:17:27.508 00:17:27.508 ' 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:27.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.508 --rc genhtml_branch_coverage=1 00:17:27.508 --rc genhtml_function_coverage=1 00:17:27.508 --rc genhtml_legend=1 00:17:27.508 --rc geninfo_all_blocks=1 00:17:27.508 --rc geninfo_unexecuted_blocks=1 00:17:27.508 00:17:27.508 ' 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:27.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.508 --rc genhtml_branch_coverage=1 00:17:27.508 --rc genhtml_function_coverage=1 00:17:27.508 --rc genhtml_legend=1 00:17:27.508 --rc geninfo_all_blocks=1 00:17:27.508 --rc geninfo_unexecuted_blocks=1 00:17:27.508 00:17:27.508 ' 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
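
The lt 1.15 2 probe above drives lcov option selection with a field-wise version compare: split both versions on '.', '-' and ':', then compare numerically field by field. A simplified reconstruction of that helper, covering only the traced '<' case and assuming purely numeric fields (the real cmp_versions also sanitizes each field through decimal and handles the other operators):

lt() {   # usage: lt A B -> succeeds iff version A < version B
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not strictly less
}

lt 1.15 2 && echo "lcov 1.x detected: keep the --rc lcov_branch_coverage=1 spelling"

With lcov 1.15 installed, as here, the comparison succeeds, which is why the trace above exports the 1.x-style --rc lcov_* option names.
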
00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:27.508 13:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.508 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:35.645 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:35.645 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:35.645 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:35.646 Found net devices under 0000:31:00.0: cvl_0_0 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:35.646 Found net devices under 0000:31:00.1: cvl_0_1 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:35.646 13:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.646 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.907 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.907 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.907 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:35.907 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.907 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.907 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.907 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:35.907 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:35.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:17:35.907 00:17:35.907 --- 10.0.0.2 ping statistics --- 00:17:35.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.907 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:17:35.907 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:17:36.168 00:17:36.168 --- 10.0.0.1 ping statistics --- 00:17:36.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.168 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3801344 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3801344 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3801344 ']' 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:36.168 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.168 [2024-11-07 13:22:44.081110] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
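
nvmfappstart above launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket /var/tmp/spdk.sock answers. A minimal stand-in for that wait, assuming a plain socket-file poll (the retry cap mirrors max_retries=100 from the trace; the sleep interval is a guess):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    [ -S /var/tmp/spdk.sock ] && break    # socket is up; RPCs can be issued
    sleep 0.5
done
[ -S /var/tmp/spdk.sock ] || { echo "timed out waiting for spdk.sock" >&2; exit 1; }
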
00:17:36.168 [2024-11-07 13:22:44.081244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.428 [2024-11-07 13:22:44.231866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.428 [2024-11-07 13:22:44.331458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.428 [2024-11-07 13:22:44.331504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.428 [2024-11-07 13:22:44.331516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.428 [2024-11-07 13:22:44.331528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.428 [2024-11-07 13:22:44.331537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.428 [2024-11-07 13:22:44.333804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.428 [2024-11-07 13:22:44.333896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.428 [2024-11-07 13:22:44.333990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.428 [2024-11-07 13:22:44.334013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:36.999 "tick_rate": 2400000000, 00:17:36.999 "poll_groups": [ 00:17:36.999 { 00:17:36.999 "name": "nvmf_tgt_poll_group_000", 00:17:36.999 "admin_qpairs": 0, 00:17:36.999 "io_qpairs": 0, 00:17:36.999 "current_admin_qpairs": 0, 00:17:36.999 "current_io_qpairs": 0, 00:17:36.999 "pending_bdev_io": 0, 00:17:36.999 "completed_nvme_io": 0, 00:17:36.999 "transports": [] 00:17:36.999 }, 00:17:36.999 { 00:17:36.999 "name": "nvmf_tgt_poll_group_001", 00:17:36.999 "admin_qpairs": 0, 00:17:36.999 "io_qpairs": 0, 00:17:36.999 "current_admin_qpairs": 0, 00:17:36.999 "current_io_qpairs": 0, 00:17:36.999 "pending_bdev_io": 0, 00:17:36.999 "completed_nvme_io": 0, 00:17:36.999 "transports": [] 00:17:36.999 }, 00:17:36.999 { 00:17:36.999 "name": "nvmf_tgt_poll_group_002", 00:17:36.999 "admin_qpairs": 0, 00:17:36.999 "io_qpairs": 0, 00:17:36.999 
"current_admin_qpairs": 0, 00:17:36.999 "current_io_qpairs": 0, 00:17:36.999 "pending_bdev_io": 0, 00:17:36.999 "completed_nvme_io": 0, 00:17:36.999 "transports": [] 00:17:36.999 }, 00:17:36.999 { 00:17:36.999 "name": "nvmf_tgt_poll_group_003", 00:17:36.999 "admin_qpairs": 0, 00:17:36.999 "io_qpairs": 0, 00:17:36.999 "current_admin_qpairs": 0, 00:17:36.999 "current_io_qpairs": 0, 00:17:36.999 "pending_bdev_io": 0, 00:17:36.999 "completed_nvme_io": 0, 00:17:36.999 "transports": [] 00:17:36.999 } 00:17:36.999 ] 00:17:36.999 }' 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:36.999 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:37.000 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:37.000 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:37.000 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.000 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.000 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.262 [2024-11-07 13:22:45.007260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:37.262 "tick_rate": 2400000000, 00:17:37.262 "poll_groups": [ 00:17:37.262 { 00:17:37.262 "name": "nvmf_tgt_poll_group_000", 00:17:37.262 "admin_qpairs": 0, 00:17:37.262 "io_qpairs": 0, 00:17:37.262 "current_admin_qpairs": 0, 00:17:37.262 "current_io_qpairs": 0, 00:17:37.262 "pending_bdev_io": 0, 00:17:37.262 "completed_nvme_io": 0, 00:17:37.262 "transports": [ 00:17:37.262 { 00:17:37.262 "trtype": "TCP" 00:17:37.262 } 00:17:37.262 ] 00:17:37.262 }, 00:17:37.262 { 00:17:37.262 "name": "nvmf_tgt_poll_group_001", 00:17:37.262 "admin_qpairs": 0, 00:17:37.262 "io_qpairs": 0, 00:17:37.262 "current_admin_qpairs": 0, 00:17:37.262 "current_io_qpairs": 0, 00:17:37.262 "pending_bdev_io": 0, 00:17:37.262 "completed_nvme_io": 0, 00:17:37.262 "transports": [ 00:17:37.262 { 00:17:37.262 "trtype": "TCP" 00:17:37.262 } 00:17:37.262 ] 00:17:37.262 }, 00:17:37.262 { 00:17:37.262 "name": "nvmf_tgt_poll_group_002", 00:17:37.262 "admin_qpairs": 0, 00:17:37.262 "io_qpairs": 0, 00:17:37.262 "current_admin_qpairs": 0, 00:17:37.262 "current_io_qpairs": 0, 00:17:37.262 "pending_bdev_io": 0, 00:17:37.262 "completed_nvme_io": 0, 00:17:37.262 "transports": [ 00:17:37.262 { 00:17:37.262 "trtype": "TCP" 
00:17:37.262 } 00:17:37.262 ] 00:17:37.262 }, 00:17:37.262 { 00:17:37.262 "name": "nvmf_tgt_poll_group_003", 00:17:37.262 "admin_qpairs": 0, 00:17:37.262 "io_qpairs": 0, 00:17:37.262 "current_admin_qpairs": 0, 00:17:37.262 "current_io_qpairs": 0, 00:17:37.262 "pending_bdev_io": 0, 00:17:37.262 "completed_nvme_io": 0, 00:17:37.262 "transports": [ 00:17:37.262 { 00:17:37.262 "trtype": "TCP" 00:17:37.262 } 00:17:37.262 ] 00:17:37.262 } 00:17:37.262 ] 00:17:37.262 }' 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.262 Malloc1 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.262 [2024-11-07 13:22:45.250469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:37.262 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:37.524 [2024-11-07 13:22:45.287976] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:37.524 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:37.524 could not add new controller: failed to write to nvme-fabrics device 00:17:37.524 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:37.524 13:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.524 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.524 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.524 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:37.524 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.524 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.524 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.524 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:38.908 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:38.908 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:38.908 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:38.908 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:38.908 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:41.455 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:41.455 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:41.455 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:41.455 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:41.455 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:41.455 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:41.455 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:41.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:41.455 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:41.455 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:41.455 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:41.455 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:41.455 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:41.455 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:41.455 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:41.455 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:41.455 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.455 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:41.456 [2024-11-07 13:22:49.208272] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:41.456 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:41.456 could not add new controller: failed to write to nvme-fabrics device 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.456 
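The two rejected `nvme connect` attempts above bracket the host-ACL checks in this test: the first fails because allow_any_host was explicitly disabled (rpc.sh@54) on the fresh subsystem with no hosts listed, and the second fails after nvmf_subsystem_remove_host drops the only allowed NQN again — in both cases nvmf_qpair_access_allowed rejects the queue pair and the fabrics device write returns an I/O error. A minimal sketch of the same host-ACL sequence driven by hand; the RPC names are copied from the trace, while the scripts/rpc.py path and the <HOSTNQN> placeholder are assumptions:

    # deny unlisted initiators, then allow-list exactly one host NQN
    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 <HOSTNQN>
    # drop the host again, then fall back to accepting any initiator
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 <HOSTNQN>
    scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1

Only the final state matters to the connect that follows: with allow_any_host re-enabled at rpc.sh@72, the same `nvme connect` invocation succeeds.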
13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.456 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:42.840 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:42.840 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:42.840 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:42.840 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:42.841 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:44.754 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:44.754 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:44.754 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:45.015 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:45.015 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:45.015 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:45.015 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:45.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:45.015 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:45.015 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:45.015 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:45.015 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:45.275 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:45.275 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:45.275 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:45.275 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:45.276 
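From rpc.sh@78/@81 onward the test deletes cnode1 and enters its first loop: five full build-connect-teardown cycles. The loop body below is reconstructed from the xtrace output — every command string appears verbatim in the trace, while the `for` scaffolding and the $hostnqn/$hostid shorthand for the long UUID-based NQN are inferred:

    loops=5    # per the "seq 1 5" printed at rpc.sh@81
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The `-n 5` on nvmf_subsystem_add_ns pins the namespace ID, which is why nvmf_subsystem_remove_ns at the end of each cycle targets NSID 5.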
13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.276 [2024-11-07 13:22:53.080404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.276 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:46.660 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:46.660 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:46.660 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:46.660 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:46.660 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:49.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.208 [2024-11-07 13:22:56.947727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.208 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:50.595 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:50.595 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:50.595 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:50.595 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:50.595 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:53.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:53.141 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.142 [2024-11-07 13:23:00.871685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:54.528 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:54.528 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:54.528 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:54.528 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:54.528 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:56.443 
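waitforserial and waitforserial_disconnect, whose xtrace makes up most of the repetition here, simply poll lsblk until a block device with the expected serial appears (or disappears). A simplified reconstruction of the appearing-side helper, pieced together from the traced lines (`local i=0`, `sleep 2`, the `i++ <= 15` bound, and the `lsblk | grep -c` count) — the real function in autotest_common.sh may differ in detail:

    waitforserial() {
        # poll until $2 (default 1) devices with serial $1 are visible in lsblk
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while ((i++ <= 15)); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            if ((nvme_devices == nvme_device_counter)); then
                return 0
            fi
        done
        return 1
    }

The disconnect-side helper is the mirror image: it returns once `grep -q -w` on the lsblk output no longer finds the serial.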
13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:56.443 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:56.443 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:56.443 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:56.443 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:56.443 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:56.443 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:56.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:56.704 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.704 [2024-11-07 13:23:04.707292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.964 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.964 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:56.964 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.964 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.964 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.964 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.964 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.964 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.964 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.965 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:58.348 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:58.348 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:58.348 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.348 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:58.348 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:00.259 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:00.259 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:00.259 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:00.259 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:00.259 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:00.259 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:00.259 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:00.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.519 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:00.519 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:00.519 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:00.519 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:18:00.519 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:00.519 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.780 [2024-11-07 13:23:08.574515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.780 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:02.163 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:02.163 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:02.163 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:02.163 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:02.163 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:04.704 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:04.704 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:04.704 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:04.704 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:04.704 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.704 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:04.704 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:04.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.704 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:04.704 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:04.704 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:04.704 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:04.704 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:04.705 
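The `seq 1 5` at rpc.sh@99 announces the second loop, which is pure RPC churn: five rounds of creating the subsystem, attaching the listener and namespace, and deleting it all again, with no host connection in between. Reconstructed from the trace that follows (commands verbatim; the loop scaffolding is inferred):

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

Afterwards nvmf_get_stats is read one last time and jsum — a jq filter piped through awk '{s+=$1}END{print s}', as its own xtrace shows — asserts that the run actually created qpairs: the admin_qpairs summed across the four poll groups (7) and the summed io_qpairs (889) must both be positive before the test tears the target down.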
13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 [2024-11-07 13:23:12.463467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 [2024-11-07 13:23:12.527623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 
13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 [2024-11-07 13:23:12.595808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.705 [2024-11-07 13:23:12.660011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.705 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.706 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.967 [2024-11-07 13:23:12.724250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:04.967 "tick_rate": 2400000000, 00:18:04.967 "poll_groups": [ 00:18:04.967 { 00:18:04.967 "name": "nvmf_tgt_poll_group_000", 00:18:04.967 "admin_qpairs": 0, 00:18:04.967 "io_qpairs": 224, 00:18:04.967 "current_admin_qpairs": 0, 00:18:04.967 "current_io_qpairs": 0, 00:18:04.967 "pending_bdev_io": 0, 00:18:04.967 "completed_nvme_io": 226, 00:18:04.967 "transports": [ 00:18:04.967 { 00:18:04.967 "trtype": "TCP" 00:18:04.967 } 00:18:04.967 ] 00:18:04.967 }, 00:18:04.967 { 00:18:04.967 "name": "nvmf_tgt_poll_group_001", 00:18:04.967 "admin_qpairs": 1, 00:18:04.967 "io_qpairs": 223, 00:18:04.967 "current_admin_qpairs": 0, 00:18:04.967 "current_io_qpairs": 0, 00:18:04.967 "pending_bdev_io": 0, 00:18:04.967 "completed_nvme_io": 224, 00:18:04.967 "transports": [ 00:18:04.967 { 00:18:04.967 "trtype": "TCP" 00:18:04.967 } 00:18:04.967 ] 00:18:04.967 }, 00:18:04.967 { 00:18:04.967 "name": "nvmf_tgt_poll_group_002", 00:18:04.967 "admin_qpairs": 6, 00:18:04.967 "io_qpairs": 218, 00:18:04.967 "current_admin_qpairs": 0, 00:18:04.967 "current_io_qpairs": 0, 00:18:04.967 "pending_bdev_io": 0, 00:18:04.967 "completed_nvme_io": 270, 00:18:04.967 "transports": [ 00:18:04.967 { 00:18:04.967 "trtype": "TCP" 00:18:04.967 } 00:18:04.967 ] 00:18:04.967 }, 00:18:04.967 { 00:18:04.967 "name": "nvmf_tgt_poll_group_003", 00:18:04.967 "admin_qpairs": 0, 00:18:04.967 "io_qpairs": 224, 00:18:04.967 "current_admin_qpairs": 0, 00:18:04.967 "current_io_qpairs": 0, 00:18:04.967 "pending_bdev_io": 0, 00:18:04.967 "completed_nvme_io": 519, 00:18:04.967 "transports": [ 00:18:04.967 { 00:18:04.967 "trtype": "TCP" 00:18:04.967 } 00:18:04.967 ] 00:18:04.967 } 00:18:04.967 ] 00:18:04.967 }' 00:18:04.967 13:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:04.967 rmmod nvme_tcp 00:18:04.967 rmmod nvme_fabrics 00:18:04.967 rmmod nvme_keyring 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3801344 ']' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3801344 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3801344 ']' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3801344 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:04.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3801344 00:18:05.228 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:05.228 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:05.228 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3801344' 00:18:05.228 killing process with pid 3801344 00:18:05.228 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3801344 00:18:05.228 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 3801344 00:18:06.169 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:06.169 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:06.169 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:06.169 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:18:06.169 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:18:06.169 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:06.169 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:18:06.169 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:06.169 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:06.169 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.169 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.169 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.225 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:08.225 00:18:08.225 real 0m40.784s 00:18:08.225 user 1m59.563s 00:18:08.225 sys 0m8.878s 00:18:08.225 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:08.225 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.225 ************************************ 00:18:08.225 END TEST nvmf_rpc 00:18:08.225 ************************************ 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:08.225 ************************************ 00:18:08.225 START TEST nvmf_invalid 00:18:08.225 ************************************ 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:08.225 * Looking for test storage... 
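A note on the arithmetic the nvmf_rpc test just used: its jsum helper pipes one numeric field out of every poll group in the nvmf_get_stats JSON through jq, then totals the column with awk. A minimal standalone sketch of that pattern, assuming $stats holds the JSON captured above:

jsum() {
    # Total one numeric field across all poll groups, e.g.
    # '.poll_groups[].io_qpairs' -> 224 + 223 + 218 + 224 = 889 above.
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

Against the stats shown earlier this yields 7 for admin_qpairs and 889 for io_qpairs, matching the (( 7 > 0 )) and (( 889 > 0 )) checks in the trace.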
00:18:08.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:08.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.225 --rc genhtml_branch_coverage=1 00:18:08.225 --rc genhtml_function_coverage=1 00:18:08.225 --rc genhtml_legend=1 00:18:08.225 --rc geninfo_all_blocks=1 00:18:08.225 --rc geninfo_unexecuted_blocks=1 00:18:08.225 00:18:08.225 ' 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:08.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.225 --rc genhtml_branch_coverage=1 00:18:08.225 --rc genhtml_function_coverage=1 00:18:08.225 --rc genhtml_legend=1 00:18:08.225 --rc geninfo_all_blocks=1 00:18:08.225 --rc geninfo_unexecuted_blocks=1 00:18:08.225 00:18:08.225 ' 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:08.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.225 --rc genhtml_branch_coverage=1 00:18:08.225 --rc genhtml_function_coverage=1 00:18:08.225 --rc genhtml_legend=1 00:18:08.225 --rc geninfo_all_blocks=1 00:18:08.225 --rc geninfo_unexecuted_blocks=1 00:18:08.225 00:18:08.225 ' 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:08.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.225 --rc genhtml_branch_coverage=1 00:18:08.225 --rc genhtml_function_coverage=1 00:18:08.225 --rc genhtml_legend=1 00:18:08.225 --rc geninfo_all_blocks=1 00:18:08.225 --rc geninfo_unexecuted_blocks=1 00:18:08.225 00:18:08.225 ' 00:18:08.225 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:08.488 13:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:08.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:08.488 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:16.631 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:16.631 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.631 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:16.631 Found net devices under 0000:31:00.0: cvl_0_0 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:16.632 Found net devices under 0000:31:00.1: cvl_0_1 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:16.632 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.892 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.892 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.892 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:16.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms 00:18:16.893 00:18:16.893 --- 10.0.0.2 ping statistics --- 00:18:16.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.893 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:18:16.893 00:18:16.893 --- 10.0.0.1 ping statistics --- 00:18:16.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.893 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3811833 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3811833 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3811833 ']' 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:16.893 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:16.893 [2024-11-07 13:23:24.892930] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
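Before nvmf_tgt was launched, nvmftestinit split the two e810 ports across a network namespace so the target and initiator sides can talk over a real link. Condensed from the commands traced above (the interface names, the namespace, and the 10.0.0.0/24 addressing are as captured; this is a recap sketch, not the full helper):

ip netns add cvl_0_0_ns_spdk                               # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                         # reachability both ways,
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # as the replies above show

The target application is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the startup banner printing here.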
00:18:16.893 [2024-11-07 13:23:24.893054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.153 [2024-11-07 13:23:25.057209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.413 [2024-11-07 13:23:25.159020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.413 [2024-11-07 13:23:25.159067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.413 [2024-11-07 13:23:25.159080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.413 [2024-11-07 13:23:25.159091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.413 [2024-11-07 13:23:25.159100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.413 [2024-11-07 13:23:25.161553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.413 [2024-11-07 13:23:25.161639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.413 [2024-11-07 13:23:25.161757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.413 [2024-11-07 13:23:25.161779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.673 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:17.673 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:18:17.673 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.673 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:17.673 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:17.933 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.933 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:17.933 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19662 00:18:17.933 [2024-11-07 13:23:25.862923] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:17.933 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:17.933 { 00:18:17.933 "nqn": "nqn.2016-06.io.spdk:cnode19662", 00:18:17.933 "tgt_name": "foobar", 00:18:17.933 "method": "nvmf_create_subsystem", 00:18:17.933 "req_id": 1 00:18:17.933 } 00:18:17.933 Got JSON-RPC error response 00:18:17.933 response: 00:18:17.933 { 00:18:17.933 "code": -32603, 00:18:17.933 "message": "Unable to find target foobar" 00:18:17.933 }' 00:18:17.933 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:17.933 { 00:18:17.933 "nqn": "nqn.2016-06.io.spdk:cnode19662", 00:18:17.933 "tgt_name": "foobar", 00:18:17.933 "method": "nvmf_create_subsystem", 00:18:17.933 "req_id": 1 00:18:17.933 } 00:18:17.933 Got JSON-RPC error response 00:18:17.933 
response: 00:18:17.933 { 00:18:17.933 "code": -32603, 00:18:17.933 "message": "Unable to find target foobar" 00:18:17.933 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:17.933 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:17.933 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15991 00:18:18.194 [2024-11-07 13:23:26.055554] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15991: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:18.194 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:18.194 { 00:18:18.194 "nqn": "nqn.2016-06.io.spdk:cnode15991", 00:18:18.194 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:18.194 "method": "nvmf_create_subsystem", 00:18:18.194 "req_id": 1 00:18:18.194 } 00:18:18.194 Got JSON-RPC error response 00:18:18.194 response: 00:18:18.194 { 00:18:18.194 "code": -32602, 00:18:18.194 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:18.194 }' 00:18:18.194 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:18.194 { 00:18:18.194 "nqn": "nqn.2016-06.io.spdk:cnode15991", 00:18:18.194 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:18.194 "method": "nvmf_create_subsystem", 00:18:18.194 "req_id": 1 00:18:18.194 } 00:18:18.194 Got JSON-RPC error response 00:18:18.194 response: 00:18:18.194 { 00:18:18.194 "code": -32602, 00:18:18.194 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:18.194 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:18.194 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:18.194 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25008 00:18:18.456 [2024-11-07 13:23:26.240366] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25008: invalid model number 'SPDK_Controller' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:18.456 { 00:18:18.456 "nqn": "nqn.2016-06.io.spdk:cnode25008", 00:18:18.456 "model_number": "SPDK_Controller\u001f", 00:18:18.456 "method": "nvmf_create_subsystem", 00:18:18.456 "req_id": 1 00:18:18.456 } 00:18:18.456 Got JSON-RPC error response 00:18:18.456 response: 00:18:18.456 { 00:18:18.456 "code": -32602, 00:18:18.456 "message": "Invalid MN SPDK_Controller\u001f" 00:18:18.456 }' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:18.456 { 00:18:18.456 "nqn": "nqn.2016-06.io.spdk:cnode25008", 00:18:18.456 "model_number": "SPDK_Controller\u001f", 00:18:18.456 "method": "nvmf_create_subsystem", 00:18:18.456 "req_id": 1 00:18:18.456 } 00:18:18.456 Got JSON-RPC error response 00:18:18.456 response: 00:18:18.456 { 00:18:18.456 "code": -32602, 00:18:18.456 "message": "Invalid MN SPDK_Controller\u001f" 00:18:18.456 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:18.456 13:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:18.456 13:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:18.456 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:18.457 
13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '8J}X&CRewOLp]rQQ3]H->' 00:18:18.457 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '8J}X&CRewOLp]rQQ3]H->' nqn.2016-06.io.spdk:cnode32415 00:18:18.718 [2024-11-07 13:23:26.593336] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32415: invalid serial number '8J}X&CRewOLp]rQQ3]H->' 00:18:18.718 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:18.718 { 00:18:18.718 "nqn": "nqn.2016-06.io.spdk:cnode32415", 00:18:18.718 "serial_number": "8J}X&CRewOLp]rQQ3]H->", 00:18:18.718 "method": "nvmf_create_subsystem", 00:18:18.718 "req_id": 1 00:18:18.718 } 00:18:18.718 Got JSON-RPC error response 00:18:18.718 response: 00:18:18.718 { 00:18:18.718 "code": -32602, 00:18:18.718 "message": "Invalid SN 8J}X&CRewOLp]rQQ3]H->" 00:18:18.718 }' 00:18:18.718 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:18.718 { 00:18:18.718 "nqn": "nqn.2016-06.io.spdk:cnode32415", 00:18:18.718 "serial_number": "8J}X&CRewOLp]rQQ3]H->", 00:18:18.718 "method": "nvmf_create_subsystem", 00:18:18.718 "req_id": 1 00:18:18.718 } 00:18:18.718 Got JSON-RPC error response 00:18:18.718 response: 00:18:18.718 { 00:18:18.718 "code": -32602, 00:18:18.718 "message": "Invalid SN 8J}X&CRewOLp]rQQ3]H->" 00:18:18.718 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:18.718 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:18.718 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:18.718 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' 
'77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:18.718 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:18.718 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:18.718 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:18.718 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.718 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:18:18.718 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:18:18.718 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O
[trace condensed: 38 further passes of the same @24/@25 append loop follow here, building characters 2 through 39 of the 41-character model-number string ('d' through '8'); each pass is (( ll++ )), (( ll < length )), printf %x <codepoint>, echo -e '\x<hex>', string+=<char>]
00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e
'\x65' 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]] 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Odsp1.Si![9Pu{M*K)0dy-9`#:ZzRJ"lf!2O8ee' 00:18:18.981 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Odsp1.Si![9Pu{M*K)0dy-9`#:ZzRJ"lf!2O8ee' nqn.2016-06.io.spdk:cnode22418 00:18:19.243 [2024-11-07 13:23:27.095029] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22418: invalid model number 'Odsp1.Si![9Pu{M*K)0dy-9`#:ZzRJ"lf!2O8ee' 00:18:19.243 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:19.243 { 00:18:19.243 "nqn": "nqn.2016-06.io.spdk:cnode22418", 00:18:19.243 "model_number": "Odsp1.Si![9Pu{\u007fM*K)0dy-9`#:ZzRJ\u007f\"lf!2O8ee", 00:18:19.243 "method": "nvmf_create_subsystem", 00:18:19.243 "req_id": 1 00:18:19.243 } 00:18:19.243 Got JSON-RPC error response 00:18:19.243 response: 00:18:19.243 { 00:18:19.243 "code": -32602, 00:18:19.243 "message": "Invalid MN Odsp1.Si![9Pu{\u007fM*K)0dy-9`#:ZzRJ\u007f\"lf!2O8ee" 00:18:19.243 }' 00:18:19.243 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:19.243 { 00:18:19.243 "nqn": "nqn.2016-06.io.spdk:cnode22418", 00:18:19.243 "model_number": "Odsp1.Si![9Pu{\u007fM*K)0dy-9`#:ZzRJ\u007f\"lf!2O8ee", 00:18:19.243 "method": "nvmf_create_subsystem", 00:18:19.243 "req_id": 1 00:18:19.243 } 00:18:19.243 Got JSON-RPC error response 00:18:19.243 response: 00:18:19.243 { 00:18:19.243 "code": -32602, 00:18:19.243 "message": "Invalid MN Odsp1.Si![9Pu{\u007fM*K)0dy-9`#:ZzRJ\u007f\"lf!2O8ee" 00:18:19.243 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:19.243 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:19.503 [2024-11-07 13:23:27.279709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.503 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:19.764 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:19.764 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:19.764 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:19.764 13:23:27 
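# The xtrace above is invalid.sh's gen_random_s helper assembling a random
# string one character at a time: it draws codepoints in [32,127] from the
# chars table, renders each as hex with `printf %x`, and decodes it with
# `echo -e '\xNN'`. The lengths are chosen to overflow the NVMe Identify
# fields: the earlier 21-character serial exceeds the 20-byte SN field, and
# this 41-character string exceeds the 40-byte MN field, so both
# nvmf_create_subsystem calls must be rejected. A condensed sketch of the
# same idea (reconstructed from the trace, not the verbatim helper):
gen_random_s() {
    local length=$1 ll string=
    for (( ll = 0; ll < length; ll++ )); do
        # pick a random printable codepoint and decode it back to a character
        string+=$(echo -e "\\x$(printf %x $(( RANDOM % 96 + 32 )))")
    done
    # invalid.sh@28 additionally checks whether the first character is '-'
    # (handling not captured here), presumably so the generated value is
    # never mistaken for an option flag
    echo "$string"
}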
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:19.764 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:19.764 [2024-11-07 13:23:27.673560] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:19.764 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:19.764 { 00:18:19.764 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:19.764 "listen_address": { 00:18:19.764 "trtype": "tcp", 00:18:19.764 "traddr": "", 00:18:19.764 "trsvcid": "4421" 00:18:19.764 }, 00:18:19.764 "method": "nvmf_subsystem_remove_listener", 00:18:19.764 "req_id": 1 00:18:19.764 } 00:18:19.764 Got JSON-RPC error response 00:18:19.764 response: 00:18:19.764 { 00:18:19.764 "code": -32602, 00:18:19.764 "message": "Invalid parameters" 00:18:19.764 }' 00:18:19.764 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:19.764 { 00:18:19.764 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:19.764 "listen_address": { 00:18:19.764 "trtype": "tcp", 00:18:19.764 "traddr": "", 00:18:19.764 "trsvcid": "4421" 00:18:19.764 }, 00:18:19.764 "method": "nvmf_subsystem_remove_listener", 00:18:19.764 "req_id": 1 00:18:19.764 } 00:18:19.764 Got JSON-RPC error response 00:18:19.764 response: 00:18:19.764 { 00:18:19.764 "code": -32602, 00:18:19.764 "message": "Invalid parameters" 00:18:19.764 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:19.764 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9213 -i 0 00:18:20.025 [2024-11-07 13:23:27.862141] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9213: invalid cntlid range [0-65519] 00:18:20.025 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:20.025 { 00:18:20.025 "nqn": "nqn.2016-06.io.spdk:cnode9213", 00:18:20.025 "min_cntlid": 0, 00:18:20.025 "method": "nvmf_create_subsystem", 00:18:20.025 "req_id": 1 00:18:20.025 } 00:18:20.025 Got JSON-RPC error response 00:18:20.025 response: 00:18:20.025 { 00:18:20.025 "code": -32602, 00:18:20.025 "message": "Invalid cntlid range [0-65519]" 00:18:20.025 }' 00:18:20.025 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:20.025 { 00:18:20.025 "nqn": "nqn.2016-06.io.spdk:cnode9213", 00:18:20.025 "min_cntlid": 0, 00:18:20.025 "method": "nvmf_create_subsystem", 00:18:20.025 "req_id": 1 00:18:20.025 } 00:18:20.025 Got JSON-RPC error response 00:18:20.025 response: 00:18:20.025 { 00:18:20.025 "code": -32602, 00:18:20.025 "message": "Invalid cntlid range [0-65519]" 00:18:20.025 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:20.025 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12494 -i 65520 00:18:20.285 [2024-11-07 13:23:28.046718] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12494: invalid cntlid range [65520-65519] 00:18:20.285 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:20.285 { 00:18:20.285 "nqn": 
"nqn.2016-06.io.spdk:cnode12494", 00:18:20.285 "min_cntlid": 65520, 00:18:20.285 "method": "nvmf_create_subsystem", 00:18:20.285 "req_id": 1 00:18:20.285 } 00:18:20.285 Got JSON-RPC error response 00:18:20.285 response: 00:18:20.285 { 00:18:20.285 "code": -32602, 00:18:20.285 "message": "Invalid cntlid range [65520-65519]" 00:18:20.285 }' 00:18:20.285 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:20.285 { 00:18:20.285 "nqn": "nqn.2016-06.io.spdk:cnode12494", 00:18:20.285 "min_cntlid": 65520, 00:18:20.285 "method": "nvmf_create_subsystem", 00:18:20.285 "req_id": 1 00:18:20.285 } 00:18:20.285 Got JSON-RPC error response 00:18:20.285 response: 00:18:20.285 { 00:18:20.285 "code": -32602, 00:18:20.285 "message": "Invalid cntlid range [65520-65519]" 00:18:20.285 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:20.285 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26717 -I 0 00:18:20.285 [2024-11-07 13:23:28.223318] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26717: invalid cntlid range [1-0] 00:18:20.285 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:20.285 { 00:18:20.285 "nqn": "nqn.2016-06.io.spdk:cnode26717", 00:18:20.285 "max_cntlid": 0, 00:18:20.285 "method": "nvmf_create_subsystem", 00:18:20.285 "req_id": 1 00:18:20.285 } 00:18:20.285 Got JSON-RPC error response 00:18:20.285 response: 00:18:20.285 { 00:18:20.285 "code": -32602, 00:18:20.285 "message": "Invalid cntlid range [1-0]" 00:18:20.285 }' 00:18:20.285 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:20.285 { 00:18:20.285 "nqn": "nqn.2016-06.io.spdk:cnode26717", 00:18:20.285 "max_cntlid": 0, 00:18:20.285 "method": "nvmf_create_subsystem", 00:18:20.285 "req_id": 1 00:18:20.285 } 00:18:20.285 Got JSON-RPC error response 00:18:20.285 response: 00:18:20.285 { 00:18:20.285 "code": -32602, 00:18:20.285 "message": "Invalid cntlid range [1-0]" 00:18:20.285 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:20.285 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24794 -I 65520 00:18:20.545 [2024-11-07 13:23:28.399896] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24794: invalid cntlid range [1-65520] 00:18:20.545 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:20.546 { 00:18:20.546 "nqn": "nqn.2016-06.io.spdk:cnode24794", 00:18:20.546 "max_cntlid": 65520, 00:18:20.546 "method": "nvmf_create_subsystem", 00:18:20.546 "req_id": 1 00:18:20.546 } 00:18:20.546 Got JSON-RPC error response 00:18:20.546 response: 00:18:20.546 { 00:18:20.546 "code": -32602, 00:18:20.546 "message": "Invalid cntlid range [1-65520]" 00:18:20.546 }' 00:18:20.546 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:20.546 { 00:18:20.546 "nqn": "nqn.2016-06.io.spdk:cnode24794", 00:18:20.546 "max_cntlid": 65520, 00:18:20.546 "method": "nvmf_create_subsystem", 00:18:20.546 "req_id": 1 00:18:20.546 } 00:18:20.546 Got JSON-RPC error response 00:18:20.546 response: 00:18:20.546 { 00:18:20.546 "code": -32602, 00:18:20.546 "message": "Invalid cntlid range [1-65520]" 
00:18:20.546 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:20.546 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 -i 6 -I 5 00:18:20.806 [2024-11-07 13:23:28.580456] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20: invalid cntlid range [6-5] 00:18:20.806 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:20.806 { 00:18:20.806 "nqn": "nqn.2016-06.io.spdk:cnode20", 00:18:20.806 "min_cntlid": 6, 00:18:20.806 "max_cntlid": 5, 00:18:20.806 "method": "nvmf_create_subsystem", 00:18:20.806 "req_id": 1 00:18:20.806 } 00:18:20.806 Got JSON-RPC error response 00:18:20.806 response: 00:18:20.806 { 00:18:20.806 "code": -32602, 00:18:20.806 "message": "Invalid cntlid range [6-5]" 00:18:20.806 }' 00:18:20.806 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:20.806 { 00:18:20.806 "nqn": "nqn.2016-06.io.spdk:cnode20", 00:18:20.806 "min_cntlid": 6, 00:18:20.806 "max_cntlid": 5, 00:18:20.806 "method": "nvmf_create_subsystem", 00:18:20.806 "req_id": 1 00:18:20.806 } 00:18:20.806 Got JSON-RPC error response 00:18:20.806 response: 00:18:20.806 { 00:18:20.806 "code": -32602, 00:18:20.806 "message": "Invalid cntlid range [6-5]" 00:18:20.806 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:20.806 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:20.806 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:20.806 { 00:18:20.806 "name": "foobar", 00:18:20.806 "method": "nvmf_delete_target", 00:18:20.806 "req_id": 1 00:18:20.806 } 00:18:20.806 Got JSON-RPC error response 00:18:20.806 response: 00:18:20.806 { 00:18:20.806 "code": -32602, 00:18:20.806 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:20.806 }' 00:18:20.806 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:20.806 { 00:18:20.806 "name": "foobar", 00:18:20.806 "method": "nvmf_delete_target", 00:18:20.806 "req_id": 1 00:18:20.806 } 00:18:20.806 Got JSON-RPC error response 00:18:20.806 response: 00:18:20.806 { 00:18:20.806 "code": -32602, 00:18:20.806 "message": "The specified target doesn't exist, cannot delete it." 
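# Every negative test in this file follows one shape: run the RPC, capture
# the JSON-RPC error body in $out, and assert on the message with a bash
# glob. The backslash runs in the trace, e.g. *\I\n\v\a\l\i\d\ \c\n\t\l\i\d*,
# are only how `set -x` prints a quoted glob pattern character by character.
# Minimal form of one check ($bad_serial and $nqn stand in for the values
# the script generates):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
out=$($rpc nvmf_create_subsystem -s "$bad_serial" "$nqn" 2>&1) || true
[[ $out == *"Invalid SN"* ]]   # a non-match fails the test
# The last probe (nvmf_delete_target --name foobar, issued through
# test/nvmf/target/multitarget_rpc.py) matches on "doesn't exist" instead.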
00:18:20.807 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:20.807 rmmod nvme_tcp 00:18:20.807 rmmod nvme_fabrics 00:18:20.807 rmmod nvme_keyring 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3811833 ']' 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3811833 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 3811833 ']' 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 3811833 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:20.807 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3811833 00:18:21.067 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:21.067 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:21.067 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3811833' 00:18:21.067 killing process with pid 3811833 00:18:21.067 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 3811833 00:18:21.067 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 3811833 00:18:21.638 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:21.638 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:21.638 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:21.638 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:21.638 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:18:21.638 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:21.638 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:18:21.638 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:21.638 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:21.638 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.638 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.638 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:24.183 00:18:24.183 real 0m15.674s 00:18:24.183 user 0m22.130s 00:18:24.183 sys 0m7.508s 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:24.183 ************************************ 00:18:24.183 END TEST nvmf_invalid 00:18:24.183 ************************************ 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:24.183 ************************************ 00:18:24.183 START TEST nvmf_connect_stress 00:18:24.183 ************************************ 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:24.183 * Looking for test storage... 
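# nvmftestfini, traced above, tears the target down: sync, unload the
# nvme-tcp/nvme-fabrics/nvme-keyring modules, kill the SPDK reactor by PID,
# drop only the firewall rules the suite added, remove the spdk network
# namespace, and flush the test IPs. killprocess is roughly (a sketch; the
# in-tree helper also special-cases a comm of "sudo", a branch not taken in
# this run, where comm was reactor_0):
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it is gone
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"   # the reactor is this shell's child, so wait reaps it
}
# and the iptables cleanup is exactly the pipeline traced at nvmf/common.sh@791:
iptables-save | grep -v SPDK_NVMF | iptables-restore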
00:18:24.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.183 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:24.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.184 --rc genhtml_branch_coverage=1 00:18:24.184 --rc genhtml_function_coverage=1 00:18:24.184 --rc genhtml_legend=1 00:18:24.184 --rc geninfo_all_blocks=1 00:18:24.184 --rc geninfo_unexecuted_blocks=1 00:18:24.184 00:18:24.184 ' 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:24.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.184 --rc genhtml_branch_coverage=1 00:18:24.184 --rc genhtml_function_coverage=1 00:18:24.184 --rc genhtml_legend=1 00:18:24.184 --rc geninfo_all_blocks=1 00:18:24.184 --rc geninfo_unexecuted_blocks=1 00:18:24.184 00:18:24.184 ' 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:24.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.184 --rc genhtml_branch_coverage=1 00:18:24.184 --rc genhtml_function_coverage=1 00:18:24.184 --rc genhtml_legend=1 00:18:24.184 --rc geninfo_all_blocks=1 00:18:24.184 --rc geninfo_unexecuted_blocks=1 00:18:24.184 00:18:24.184 ' 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:24.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.184 --rc genhtml_branch_coverage=1 00:18:24.184 --rc genhtml_function_coverage=1 00:18:24.184 --rc genhtml_legend=1 00:18:24.184 --rc geninfo_all_blocks=1 00:18:24.184 --rc geninfo_unexecuted_blocks=1 00:18:24.184 00:18:24.184 ' 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
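# The scripts/common.sh trace above is a pure-bash version comparison used
# to pick lcov flags: `lt 1.15 2` splits both versions on '.', '-' and ':'
# (IFS=.-:) and compares the fields numerically, left to right. Condensed
# sketch of the same logic:
lt() {  # succeeds when version $1 sorts strictly before version $2
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}
lt 1.15 2 && echo "lcov is older than 2.x"   # true for the 1.15 seen here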
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=[condensed: paths/export.sh@2 through @4 each prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the inherited PATH, so the three multi-kilobyte assignments traced here repeat the same toolchain directories six times over before the stock /usr/local/bin:...:/var/lib/snapd/snap/bin tail] 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo [the exported PATH value, as above] 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:18:24.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:24.184 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:32.325 13:23:39 
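# The bash error captured above is real and worth noting:
#   nvmf/common.sh: line 33: [: : integer expression expected
# comes from running '[' '' -eq 1 ']' with an unset variable; test's -eq
# needs integer operands, so the command just returns non-zero and the
# guarded branch is skipped. A defensive form defaults the variable first
# (the variable name here is hypothetical):
[ "${SOME_NVMF_FLAG:-0}" -eq 1 ] && echo "flag is set"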
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.325 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:32.326 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:32.326 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:32.326 Found net devices under 0000:31:00.0: cvl_0_0 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:32.326 Found net devices under 0000:31:00.1: cvl_0_1 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.326 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:32.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:18:32.326 00:18:32.326 --- 10.0.0.2 ping statistics --- 00:18:32.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.326 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:18:32.326 00:18:32.326 --- 10.0.0.1 ping statistics --- 00:18:32.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.326 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:32.326 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3817644 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3817644 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3817644 ']' 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:32.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:32.327 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.587 [2024-11-07 13:23:40.423379] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:18:32.587 [2024-11-07 13:23:40.423509] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.848 [2024-11-07 13:23:40.605292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:32.848 [2024-11-07 13:23:40.731754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.848 [2024-11-07 13:23:40.731828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.848 [2024-11-07 13:23:40.731842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.848 [2024-11-07 13:23:40.731855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.848 [2024-11-07 13:23:40.731878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.848 [2024-11-07 13:23:40.734797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.848 [2024-11-07 13:23:40.734970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.848 [2024-11-07 13:23:40.734997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.419 [2024-11-07 13:23:41.239133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
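To make the bring-up above easier to follow: connect_stress.sh drives the target entirely through rpc_cmd against the app's /var/tmp/spdk.sock (reachable from the host even though nvmf_tgt runs inside cvl_0_0_ns_spdk, since UNIX sockets live on the filesystem). A minimal hand-run sketch of the same sequence with scripts/rpc.py; the flags are copied from the trace, while the nvmf_subsystem_add_ns step is an assumption, as it is not visible in this excerpt:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                         # transport options exactly as traced above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                                 # 1000 MiB null bdev, 512-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1          # assumed step: expose NULL1 as a namespace

The listener and null-bdev calls show up in the trace just below.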
00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.419 [2024-11-07 13:23:41.265057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.419 NULL1 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3817741 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:33.419 13:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:33.419 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat [the '# for i in $(seq 1 20)' / '# cat' pair repeats verbatim, timestamps aside, until all 20 batched calls are appended to rpc.txt; remaining iterations elided] 00:18:33.420 13:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3817741 00:18:33.420 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.420 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.420 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.991 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.991 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3817741 00:18:33.991 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.991 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.991 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.252 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.252 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3817741 00:18:34.252 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.252 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.252 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.514 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.514 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3817741 00:18:34.514 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.514 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.514 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.775 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.775 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3817741 00:18:34.775 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.775 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.775 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.035 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.035 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3817741 00:18:35.035 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.035 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.035 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.606 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.606 13:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3817741 00:18:35.606 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd [the five-entry poll cycle -- '# [[ 0 == 0 ]]', '# kill -0 3817741', '# rpc_cmd', '# xtrace_disable', '# set +x' -- repeats verbatim, timestamps aside, from 13:23:43 through 13:23:51 while connect_stress runs; iterations elided] 00:18:43.410 13:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3817741 00:18:43.410 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.410 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.410 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.670 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:43.670 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.670 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3817741 00:18:43.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3817741) - No such process 00:18:43.670 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3817741 00:18:43.670 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:43.670 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:43.670 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:43.670 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.670 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:43.670 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.671 rmmod nvme_tcp 00:18:43.671 rmmod nvme_fabrics 00:18:43.671 rmmod nvme_keyring 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3817644 ']' 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3817644 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3817644 ']' 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3817644 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3817644 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
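The monitor loop that just ended (connect_stress.sh lines 34-38) has a simple shape: keep feeding the target RPCs while the stress tool is alive, then reap it. A condensed sketch, assuming rpc_cmd consumes the rpc.txt batch assembled by the cat loop above; the names follow the trace, the plumbing is simplified:

  # PERF_PID is the backgrounded connect_stress process (3817741 here)
  while kill -0 "$PERF_PID" 2> /dev/null; do    # true only while the pid still exists
      rpc_cmd < "$rpcs"                         # replay the batched calls, keeping the target busy under load
  done
  wait "$PERF_PID"                              # collect the exit status; the 'No such process' above
                                                # is just kill -0 observing that the tool exited
  rm -f "$rpcs"                                 # discard rpc.txt, as on line 39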
00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3817644' 00:18:43.671 killing process with pid 3817644 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3817644 00:18:43.671 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3817644 00:18:44.241 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.241 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.241 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.241 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:44.241 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:44.241 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:44.241 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.241 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.241 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:44.241 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.241 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.242 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:46.785 00:18:46.785 real 0m22.549s 00:18:46.785 user 0m43.352s 00:18:46.785 sys 0m9.769s 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.785 ************************************ 00:18:46.785 END TEST nvmf_connect_stress 00:18:46.785 ************************************ 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.785 ************************************ 00:18:46.785 START TEST nvmf_fused_ordering 00:18:46.785 ************************************ 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:46.785 * Looking for test storage... 
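Next the trace works through scripts/common.sh deciding which lcov flags the installed lcov supports, via 'lt 1.15 2'. A rough reconstruction of the helper it exercises (a sketch inferred from the trace below, not the verbatim script): versions are split on dots and dashes and compared component-wise, with missing components defaulting to 0:

  # rough sketch of cmp_versions as exercised below
  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v a b
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          ((a > b)) && { [[ $op == '>' ]]; return; }
          ((a < b)) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *'='* ]]
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  # so 'lt 1.15 2' is true on the first component (1 < 2), and lcov 1.15
  # selects the '--rc lcov_branch_coverage=1 ...' option set seen below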
00:18:46.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.785 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:46.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.786 --rc genhtml_branch_coverage=1 00:18:46.786 --rc genhtml_function_coverage=1 00:18:46.786 --rc genhtml_legend=1 00:18:46.786 --rc geninfo_all_blocks=1 00:18:46.786 --rc geninfo_unexecuted_blocks=1 00:18:46.786 00:18:46.786 ' 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:46.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.786 --rc genhtml_branch_coverage=1 00:18:46.786 --rc genhtml_function_coverage=1 00:18:46.786 --rc genhtml_legend=1 00:18:46.786 --rc geninfo_all_blocks=1 00:18:46.786 --rc geninfo_unexecuted_blocks=1 00:18:46.786 00:18:46.786 ' 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:46.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.786 --rc genhtml_branch_coverage=1 00:18:46.786 --rc genhtml_function_coverage=1 00:18:46.786 --rc genhtml_legend=1 00:18:46.786 --rc geninfo_all_blocks=1 00:18:46.786 --rc geninfo_unexecuted_blocks=1 00:18:46.786 00:18:46.786 ' 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:46.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.786 --rc genhtml_branch_coverage=1 00:18:46.786 --rc genhtml_function_coverage=1 00:18:46.786 --rc genhtml_legend=1 00:18:46.786 --rc geninfo_all_blocks=1 00:18:46.786 --rc geninfo_unexecuted_blocks=1 00:18:46.786 00:18:46.786 ' 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:46.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.786 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.787 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.787 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:46.787 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:46.787 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:46.787 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:54.935 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:54.935 13:24:02 
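The "integer expression expected" message logged above is a classic bash pitfall: common.sh line 33 runs a numeric test against a variable that happens to be empty, and [ '' -eq 1 ] is a runtime error rather than a clean false. A minimal reproduction with a common guard (the variable name here is hypothetical):

    flag=''
    if [ "$flag" -eq 1 ]; then echo yes; fi       # bash: [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then echo yes; else echo no; fi   # guarded: prints "no"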
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:54.936 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
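The device scan in this stretch of the trace matches PCI vendor:device IDs (0x8086:0x159b is an Intel E810 port) and then resolves the kernel netdev bound to each hit via /sys/bus/pci/devices/$pci/net. The real helper consults a prebuilt pci_bus_cache map; an equivalent standalone sysfs walk would look roughly like:

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        if [ "$(cat "$pci/vendor")" = "$intel" ] && [ "$(cat "$pci/device")" = "0x159b" ]; then
            echo "Found ${pci##*/} ($intel - 0x159b)"
            ls "$pci/net"                         # netdev name(s) for this port, e.g. cvl_0_0
        fi
    done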
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:54.936 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:54.936 Found net devices under 0000:31:00.0: cvl_0_0 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:54.936 Found net devices under 0000:31:00.1: cvl_0_1 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
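The nvmf_tcp_init sequence traced just below turns the two E810 ports found above into a self-contained NVMe/TCP link on a single host: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the traced commands:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The two ping checks that follow in the log are the smoke test that traffic actually crosses the namespace boundary in both directions before any NVMe traffic is attempted.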
-- # net_devs+=("${pci_net_devs[@]}") 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.936 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:55.197 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:55.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:18:55.197 00:18:55.197 --- 10.0.0.2 ping statistics --- 00:18:55.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.197 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:18:55.197 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:55.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:18:55.198 00:18:55.198 --- 10.0.0.1 ping statistics --- 00:18:55.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.198 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:55.198 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3824745 00:18:55.198 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3824745 00:18:55.198 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:55.198 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3824745 ']' 00:18:55.198 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.198 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:55.198 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:55.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.198 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:55.198 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:55.198 [2024-11-07 13:24:03.088357] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:18:55.198 [2024-11-07 13:24:03.088470] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.459 [2024-11-07 13:24:03.263674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.459 [2024-11-07 13:24:03.384338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.459 [2024-11-07 13:24:03.384402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.459 [2024-11-07 13:24:03.384414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.459 [2024-11-07 13:24:03.384427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.459 [2024-11-07 13:24:03.384441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.459 [2024-11-07 13:24:03.385928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:56.032 [2024-11-07 13:24:03.938988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:56.032 [2024-11-07 13:24:03.963247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:56.032 NULL1 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.032 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:56.032 [2024-11-07 13:24:04.034125] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
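Target configuration in this test boils down to the handful of RPCs traced above against the app's UNIX socket; rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py (the wrapper detail is an assumption here, but the flags are copied from the trace). The fused_ordering binary then connects as the initiator using the transport-ID string shown ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:...'). The equivalent direct invocation:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512          # backs the 1GB namespace reported below
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1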
00:18:56.032 [2024-11-07 13:24:04.034213] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824883 ] 00:18:56.606 Attached to nqn.2016-06.io.spdk:cnode1 00:18:56.606 Namespace ID: 1 size: 1GB 00:18:56.606 fused_ordering(0) 00:18:56.606 fused_ordering(1) 00:18:56.606 fused_ordering(2) 00:18:56.606 fused_ordering(3) 00:18:56.606 fused_ordering(4) 00:18:56.606 fused_ordering(5) 00:18:56.606 fused_ordering(6) 00:18:56.606 fused_ordering(7) 00:18:56.606 fused_ordering(8) 00:18:56.606 fused_ordering(9) 00:18:56.606 fused_ordering(10) 00:18:56.606 fused_ordering(11) 00:18:56.606 fused_ordering(12) 00:18:56.606 fused_ordering(13) 00:18:56.606 fused_ordering(14) 00:18:56.606 fused_ordering(15) 00:18:56.606 fused_ordering(16) 00:18:56.606 fused_ordering(17) 00:18:56.606 fused_ordering(18) 00:18:56.606 fused_ordering(19) 00:18:56.606 fused_ordering(20) 00:18:56.606 fused_ordering(21) 00:18:56.606 fused_ordering(22) 00:18:56.606 fused_ordering(23) 00:18:56.606 fused_ordering(24) 00:18:56.606 fused_ordering(25) 00:18:56.606 fused_ordering(26) 00:18:56.606 fused_ordering(27) 00:18:56.606 fused_ordering(28) 00:18:56.606 fused_ordering(29) 00:18:56.606 fused_ordering(30) 00:18:56.606 fused_ordering(31) 00:18:56.606 fused_ordering(32) 00:18:56.606 fused_ordering(33) 00:18:56.606 fused_ordering(34) 00:18:56.606 fused_ordering(35) 00:18:56.606 fused_ordering(36) 00:18:56.606 fused_ordering(37) 00:18:56.606 fused_ordering(38) 00:18:56.606 fused_ordering(39) 00:18:56.606 fused_ordering(40) 00:18:56.606 fused_ordering(41) 00:18:56.606 fused_ordering(42) 00:18:56.606 fused_ordering(43) 00:18:56.606 fused_ordering(44) 00:18:56.606 fused_ordering(45) 00:18:56.606 fused_ordering(46) 00:18:56.606 fused_ordering(47) 00:18:56.606 fused_ordering(48) 00:18:56.606 fused_ordering(49) 00:18:56.606 fused_ordering(50) 00:18:56.606 fused_ordering(51) 00:18:56.606 fused_ordering(52) 00:18:56.606 fused_ordering(53) 00:18:56.606 fused_ordering(54) 00:18:56.606 fused_ordering(55) 00:18:56.606 fused_ordering(56) 00:18:56.606 fused_ordering(57) 00:18:56.606 fused_ordering(58) 00:18:56.606 fused_ordering(59) 00:18:56.606 fused_ordering(60) 00:18:56.606 fused_ordering(61) 00:18:56.606 fused_ordering(62) 00:18:56.606 fused_ordering(63) 00:18:56.606 fused_ordering(64) 00:18:56.606 fused_ordering(65) 00:18:56.606 fused_ordering(66) 00:18:56.606 fused_ordering(67) 00:18:56.606 fused_ordering(68) 00:18:56.606 fused_ordering(69) 00:18:56.606 fused_ordering(70) 00:18:56.606 fused_ordering(71) 00:18:56.606 fused_ordering(72) 00:18:56.606 fused_ordering(73) 00:18:56.606 fused_ordering(74) 00:18:56.606 fused_ordering(75) 00:18:56.606 fused_ordering(76) 00:18:56.606 fused_ordering(77) 00:18:56.606 fused_ordering(78) 00:18:56.606 fused_ordering(79) 00:18:56.606 fused_ordering(80) 00:18:56.606 fused_ordering(81) 00:18:56.606 fused_ordering(82) 00:18:56.606 fused_ordering(83) 00:18:56.606 fused_ordering(84) 00:18:56.606 fused_ordering(85) 00:18:56.606 fused_ordering(86) 00:18:56.606 fused_ordering(87) 00:18:56.606 fused_ordering(88) 00:18:56.606 fused_ordering(89) 00:18:56.606 fused_ordering(90) 00:18:56.606 fused_ordering(91) 00:18:56.606 fused_ordering(92) 00:18:56.606 fused_ordering(93) 00:18:56.606 fused_ordering(94) 00:18:56.606 fused_ordering(95) 00:18:56.606 fused_ordering(96) 00:18:56.606 fused_ordering(97) 00:18:56.606 fused_ordering(98) 
00:18:56.606 fused_ordering(99) 00:18:56.606 fused_ordering(100) 00:18:56.606 fused_ordering(101) 00:18:56.606 fused_ordering(102) 00:18:56.606 fused_ordering(103) 00:18:56.606 fused_ordering(104) 00:18:56.606 fused_ordering(105) 00:18:56.606 fused_ordering(106) 00:18:56.606 fused_ordering(107) 00:18:56.606 fused_ordering(108) 00:18:56.606 fused_ordering(109) 00:18:56.606 fused_ordering(110) 00:18:56.606 fused_ordering(111) 00:18:56.606 fused_ordering(112) 00:18:56.606 fused_ordering(113) 00:18:56.606 fused_ordering(114) 00:18:56.606 fused_ordering(115) 00:18:56.606 fused_ordering(116) 00:18:56.606 fused_ordering(117) 00:18:56.606 fused_ordering(118) 00:18:56.606 fused_ordering(119) 00:18:56.606 fused_ordering(120) 00:18:56.606 fused_ordering(121) 00:18:56.606 fused_ordering(122) 00:18:56.606 fused_ordering(123) 00:18:56.606 fused_ordering(124) 00:18:56.606 fused_ordering(125) 00:18:56.606 fused_ordering(126) 00:18:56.606 fused_ordering(127) 00:18:56.606 fused_ordering(128) 00:18:56.606 fused_ordering(129) 00:18:56.606 fused_ordering(130) 00:18:56.606 fused_ordering(131) 00:18:56.606 fused_ordering(132) 00:18:56.606 fused_ordering(133) 00:18:56.606 fused_ordering(134) 00:18:56.606 fused_ordering(135) 00:18:56.606 fused_ordering(136) 00:18:56.606 fused_ordering(137) 00:18:56.606 fused_ordering(138) 00:18:56.606 fused_ordering(139) 00:18:56.606 fused_ordering(140) 00:18:56.606 fused_ordering(141) 00:18:56.606 fused_ordering(142) 00:18:56.606 fused_ordering(143) 00:18:56.606 fused_ordering(144) 00:18:56.606 fused_ordering(145) 00:18:56.606 fused_ordering(146) 00:18:56.606 fused_ordering(147) 00:18:56.606 fused_ordering(148) 00:18:56.606 fused_ordering(149) 00:18:56.606 fused_ordering(150) 00:18:56.606 fused_ordering(151) 00:18:56.606 fused_ordering(152) 00:18:56.607 fused_ordering(153) 00:18:56.607 fused_ordering(154) 00:18:56.607 fused_ordering(155) 00:18:56.607 fused_ordering(156) 00:18:56.607 fused_ordering(157) 00:18:56.607 fused_ordering(158) 00:18:56.607 fused_ordering(159) 00:18:56.607 fused_ordering(160) 00:18:56.607 fused_ordering(161) 00:18:56.607 fused_ordering(162) 00:18:56.607 fused_ordering(163) 00:18:56.607 fused_ordering(164) 00:18:56.607 fused_ordering(165) 00:18:56.607 fused_ordering(166) 00:18:56.607 fused_ordering(167) 00:18:56.607 fused_ordering(168) 00:18:56.607 fused_ordering(169) 00:18:56.607 fused_ordering(170) 00:18:56.607 fused_ordering(171) 00:18:56.607 fused_ordering(172) 00:18:56.607 fused_ordering(173) 00:18:56.607 fused_ordering(174) 00:18:56.607 fused_ordering(175) 00:18:56.607 fused_ordering(176) 00:18:56.607 fused_ordering(177) 00:18:56.607 fused_ordering(178) 00:18:56.607 fused_ordering(179) 00:18:56.607 fused_ordering(180) 00:18:56.607 fused_ordering(181) 00:18:56.607 fused_ordering(182) 00:18:56.607 fused_ordering(183) 00:18:56.607 fused_ordering(184) 00:18:56.607 fused_ordering(185) 00:18:56.607 fused_ordering(186) 00:18:56.607 fused_ordering(187) 00:18:56.607 fused_ordering(188) 00:18:56.607 fused_ordering(189) 00:18:56.607 fused_ordering(190) 00:18:56.607 fused_ordering(191) 00:18:56.607 fused_ordering(192) 00:18:56.607 fused_ordering(193) 00:18:56.607 fused_ordering(194) 00:18:56.607 fused_ordering(195) 00:18:56.607 fused_ordering(196) 00:18:56.607 fused_ordering(197) 00:18:56.607 fused_ordering(198) 00:18:56.607 fused_ordering(199) 00:18:56.607 fused_ordering(200) 00:18:56.607 fused_ordering(201) 00:18:56.607 fused_ordering(202) 00:18:56.607 fused_ordering(203) 00:18:56.607 fused_ordering(204) 00:18:56.607 fused_ordering(205) 00:18:56.869 
fused_ordering(206) 00:18:56.869 fused_ordering(207) 00:18:56.869 fused_ordering(208) 00:18:56.869 fused_ordering(209) 00:18:56.869 fused_ordering(210) 00:18:56.869 fused_ordering(211) 00:18:56.869 fused_ordering(212) 00:18:56.869 fused_ordering(213) 00:18:56.869 fused_ordering(214) 00:18:56.869 fused_ordering(215) 00:18:56.869 fused_ordering(216) 00:18:56.869 fused_ordering(217) 00:18:56.869 fused_ordering(218) 00:18:56.869 fused_ordering(219) 00:18:56.869 fused_ordering(220) 00:18:56.869 fused_ordering(221) 00:18:56.869 fused_ordering(222) 00:18:56.869 fused_ordering(223) 00:18:56.869 fused_ordering(224) 00:18:56.869 fused_ordering(225) 00:18:56.869 fused_ordering(226) 00:18:56.869 fused_ordering(227) 00:18:56.869 fused_ordering(228) 00:18:56.869 fused_ordering(229) 00:18:56.869 fused_ordering(230) 00:18:56.869 fused_ordering(231) 00:18:56.869 fused_ordering(232) 00:18:56.869 fused_ordering(233) 00:18:56.869 fused_ordering(234) 00:18:56.869 fused_ordering(235) 00:18:56.869 fused_ordering(236) 00:18:56.869 fused_ordering(237) 00:18:56.869 fused_ordering(238) 00:18:56.869 fused_ordering(239) 00:18:56.869 fused_ordering(240) 00:18:56.869 fused_ordering(241) 00:18:56.869 fused_ordering(242) 00:18:56.869 fused_ordering(243) 00:18:56.869 fused_ordering(244) 00:18:56.869 fused_ordering(245) 00:18:56.869 fused_ordering(246) 00:18:56.869 fused_ordering(247) 00:18:56.869 fused_ordering(248) 00:18:56.869 fused_ordering(249) 00:18:56.869 fused_ordering(250) 00:18:56.869 fused_ordering(251) 00:18:56.869 fused_ordering(252) 00:18:56.869 fused_ordering(253) 00:18:56.869 fused_ordering(254) 00:18:56.869 fused_ordering(255) 00:18:56.869 fused_ordering(256) 00:18:56.869 fused_ordering(257) 00:18:56.869 fused_ordering(258) 00:18:56.869 fused_ordering(259) 00:18:56.869 fused_ordering(260) 00:18:56.869 fused_ordering(261) 00:18:56.869 fused_ordering(262) 00:18:56.869 fused_ordering(263) 00:18:56.869 fused_ordering(264) 00:18:56.869 fused_ordering(265) 00:18:56.869 fused_ordering(266) 00:18:56.869 fused_ordering(267) 00:18:56.869 fused_ordering(268) 00:18:56.869 fused_ordering(269) 00:18:56.869 fused_ordering(270) 00:18:56.869 fused_ordering(271) 00:18:56.869 fused_ordering(272) 00:18:56.869 fused_ordering(273) 00:18:56.869 fused_ordering(274) 00:18:56.869 fused_ordering(275) 00:18:56.869 fused_ordering(276) 00:18:56.869 fused_ordering(277) 00:18:56.869 fused_ordering(278) 00:18:56.869 fused_ordering(279) 00:18:56.869 fused_ordering(280) 00:18:56.869 fused_ordering(281) 00:18:56.869 fused_ordering(282) 00:18:56.869 fused_ordering(283) 00:18:56.869 fused_ordering(284) 00:18:56.869 fused_ordering(285) 00:18:56.869 fused_ordering(286) 00:18:56.869 fused_ordering(287) 00:18:56.869 fused_ordering(288) 00:18:56.869 fused_ordering(289) 00:18:56.869 fused_ordering(290) 00:18:56.869 fused_ordering(291) 00:18:56.869 fused_ordering(292) 00:18:56.869 fused_ordering(293) 00:18:56.869 fused_ordering(294) 00:18:56.869 fused_ordering(295) 00:18:56.869 fused_ordering(296) 00:18:56.869 fused_ordering(297) 00:18:56.869 fused_ordering(298) 00:18:56.869 fused_ordering(299) 00:18:56.869 fused_ordering(300) 00:18:56.869 fused_ordering(301) 00:18:56.869 fused_ordering(302) 00:18:56.869 fused_ordering(303) 00:18:56.869 fused_ordering(304) 00:18:56.869 fused_ordering(305) 00:18:56.869 fused_ordering(306) 00:18:56.869 fused_ordering(307) 00:18:56.869 fused_ordering(308) 00:18:56.869 fused_ordering(309) 00:18:56.869 fused_ordering(310) 00:18:56.869 fused_ordering(311) 00:18:56.869 fused_ordering(312) 00:18:56.869 fused_ordering(313) 
00:18:56.869 fused_ordering(314) 00:18:56.869 fused_ordering(315) 00:18:56.869 fused_ordering(316) 00:18:56.869 fused_ordering(317) 00:18:56.869 fused_ordering(318) 00:18:56.869 fused_ordering(319) 00:18:56.869 fused_ordering(320) 00:18:56.869 fused_ordering(321) 00:18:56.869 fused_ordering(322) 00:18:56.869 fused_ordering(323) 00:18:56.869 fused_ordering(324) 00:18:56.869 fused_ordering(325) 00:18:56.869 fused_ordering(326) 00:18:56.869 fused_ordering(327) 00:18:56.869 fused_ordering(328) 00:18:56.869 fused_ordering(329) 00:18:56.869 fused_ordering(330) 00:18:56.869 fused_ordering(331) 00:18:56.869 fused_ordering(332) 00:18:56.869 fused_ordering(333) 00:18:56.869 fused_ordering(334) 00:18:56.869 fused_ordering(335) 00:18:56.869 fused_ordering(336) 00:18:56.869 fused_ordering(337) 00:18:56.869 fused_ordering(338) 00:18:56.869 fused_ordering(339) 00:18:56.869 fused_ordering(340) 00:18:56.869 fused_ordering(341) 00:18:56.869 fused_ordering(342) 00:18:56.869 fused_ordering(343) 00:18:56.869 fused_ordering(344) 00:18:56.869 fused_ordering(345) 00:18:56.869 fused_ordering(346) 00:18:56.869 fused_ordering(347) 00:18:56.869 fused_ordering(348) 00:18:56.869 fused_ordering(349) 00:18:56.869 fused_ordering(350) 00:18:56.869 fused_ordering(351) 00:18:56.869 fused_ordering(352) 00:18:56.869 fused_ordering(353) 00:18:56.869 fused_ordering(354) 00:18:56.869 fused_ordering(355) 00:18:56.869 fused_ordering(356) 00:18:56.869 fused_ordering(357) 00:18:56.869 fused_ordering(358) 00:18:56.869 fused_ordering(359) 00:18:56.869 fused_ordering(360) 00:18:56.869 fused_ordering(361) 00:18:56.869 fused_ordering(362) 00:18:56.869 fused_ordering(363) 00:18:56.869 fused_ordering(364) 00:18:56.869 fused_ordering(365) 00:18:56.869 fused_ordering(366) 00:18:56.869 fused_ordering(367) 00:18:56.869 fused_ordering(368) 00:18:56.869 fused_ordering(369) 00:18:56.869 fused_ordering(370) 00:18:56.869 fused_ordering(371) 00:18:56.869 fused_ordering(372) 00:18:56.869 fused_ordering(373) 00:18:56.869 fused_ordering(374) 00:18:56.869 fused_ordering(375) 00:18:56.869 fused_ordering(376) 00:18:56.869 fused_ordering(377) 00:18:56.869 fused_ordering(378) 00:18:56.869 fused_ordering(379) 00:18:56.869 fused_ordering(380) 00:18:56.869 fused_ordering(381) 00:18:56.869 fused_ordering(382) 00:18:56.869 fused_ordering(383) 00:18:56.869 fused_ordering(384) 00:18:56.869 fused_ordering(385) 00:18:56.869 fused_ordering(386) 00:18:56.869 fused_ordering(387) 00:18:56.869 fused_ordering(388) 00:18:56.869 fused_ordering(389) 00:18:56.869 fused_ordering(390) 00:18:56.869 fused_ordering(391) 00:18:56.869 fused_ordering(392) 00:18:56.869 fused_ordering(393) 00:18:56.869 fused_ordering(394) 00:18:56.869 fused_ordering(395) 00:18:56.869 fused_ordering(396) 00:18:56.869 fused_ordering(397) 00:18:56.869 fused_ordering(398) 00:18:56.869 fused_ordering(399) 00:18:56.869 fused_ordering(400) 00:18:56.869 fused_ordering(401) 00:18:56.869 fused_ordering(402) 00:18:56.869 fused_ordering(403) 00:18:56.869 fused_ordering(404) 00:18:56.869 fused_ordering(405) 00:18:56.869 fused_ordering(406) 00:18:56.869 fused_ordering(407) 00:18:56.869 fused_ordering(408) 00:18:56.869 fused_ordering(409) 00:18:56.869 fused_ordering(410) 00:18:57.442 fused_ordering(411) 00:18:57.442 fused_ordering(412) 00:18:57.442 fused_ordering(413) 00:18:57.442 fused_ordering(414) 00:18:57.442 fused_ordering(415) 00:18:57.442 fused_ordering(416) 00:18:57.442 fused_ordering(417) 00:18:57.442 fused_ordering(418) 00:18:57.442 fused_ordering(419) 00:18:57.442 fused_ordering(420) 00:18:57.442 
fused_ordering(421) 00:18:57.442 fused_ordering(422) 00:18:57.442 fused_ordering(423) 00:18:57.442 fused_ordering(424) 00:18:57.442 fused_ordering(425) 00:18:57.442 fused_ordering(426) 00:18:57.442 fused_ordering(427) 00:18:57.442 fused_ordering(428) 00:18:57.442 fused_ordering(429) 00:18:57.442 fused_ordering(430) 00:18:57.442 fused_ordering(431) 00:18:57.442 fused_ordering(432) 00:18:57.442 fused_ordering(433) 00:18:57.442 fused_ordering(434) 00:18:57.442 fused_ordering(435) 00:18:57.442 fused_ordering(436) 00:18:57.442 fused_ordering(437) 00:18:57.442 fused_ordering(438) 00:18:57.442 fused_ordering(439) 00:18:57.442 fused_ordering(440) 00:18:57.442 fused_ordering(441) 00:18:57.442 fused_ordering(442) 00:18:57.442 fused_ordering(443) 00:18:57.442 fused_ordering(444) 00:18:57.442 fused_ordering(445) 00:18:57.442 fused_ordering(446) 00:18:57.442 fused_ordering(447) 00:18:57.442 fused_ordering(448) 00:18:57.442 fused_ordering(449) 00:18:57.442 fused_ordering(450) 00:18:57.442 fused_ordering(451) 00:18:57.442 fused_ordering(452) 00:18:57.442 fused_ordering(453) 00:18:57.442 fused_ordering(454) 00:18:57.442 fused_ordering(455) 00:18:57.442 fused_ordering(456) 00:18:57.442 fused_ordering(457) 00:18:57.442 fused_ordering(458) 00:18:57.442 fused_ordering(459) 00:18:57.442 fused_ordering(460) 00:18:57.442 fused_ordering(461) 00:18:57.442 fused_ordering(462) 00:18:57.442 fused_ordering(463) 00:18:57.442 fused_ordering(464) 00:18:57.442 fused_ordering(465) 00:18:57.442 fused_ordering(466) 00:18:57.442 fused_ordering(467) 00:18:57.442 fused_ordering(468) 00:18:57.442 fused_ordering(469) 00:18:57.442 fused_ordering(470) 00:18:57.442 fused_ordering(471) 00:18:57.442 fused_ordering(472) 00:18:57.442 fused_ordering(473) 00:18:57.442 fused_ordering(474) 00:18:57.442 fused_ordering(475) 00:18:57.442 fused_ordering(476) 00:18:57.442 fused_ordering(477) 00:18:57.442 fused_ordering(478) 00:18:57.442 fused_ordering(479) 00:18:57.442 fused_ordering(480) 00:18:57.442 fused_ordering(481) 00:18:57.442 fused_ordering(482) 00:18:57.442 fused_ordering(483) 00:18:57.442 fused_ordering(484) 00:18:57.442 fused_ordering(485) 00:18:57.442 fused_ordering(486) 00:18:57.442 fused_ordering(487) 00:18:57.442 fused_ordering(488) 00:18:57.442 fused_ordering(489) 00:18:57.442 fused_ordering(490) 00:18:57.442 fused_ordering(491) 00:18:57.442 fused_ordering(492) 00:18:57.442 fused_ordering(493) 00:18:57.442 fused_ordering(494) 00:18:57.442 fused_ordering(495) 00:18:57.442 fused_ordering(496) 00:18:57.442 fused_ordering(497) 00:18:57.442 fused_ordering(498) 00:18:57.442 fused_ordering(499) 00:18:57.442 fused_ordering(500) 00:18:57.442 fused_ordering(501) 00:18:57.442 fused_ordering(502) 00:18:57.442 fused_ordering(503) 00:18:57.442 fused_ordering(504) 00:18:57.442 fused_ordering(505) 00:18:57.442 fused_ordering(506) 00:18:57.442 fused_ordering(507) 00:18:57.442 fused_ordering(508) 00:18:57.442 fused_ordering(509) 00:18:57.442 fused_ordering(510) 00:18:57.442 fused_ordering(511) 00:18:57.442 fused_ordering(512) 00:18:57.442 fused_ordering(513) 00:18:57.442 fused_ordering(514) 00:18:57.442 fused_ordering(515) 00:18:57.442 fused_ordering(516) 00:18:57.442 fused_ordering(517) 00:18:57.442 fused_ordering(518) 00:18:57.442 fused_ordering(519) 00:18:57.442 fused_ordering(520) 00:18:57.442 fused_ordering(521) 00:18:57.442 fused_ordering(522) 00:18:57.442 fused_ordering(523) 00:18:57.442 fused_ordering(524) 00:18:57.442 fused_ordering(525) 00:18:57.442 fused_ordering(526) 00:18:57.442 fused_ordering(527) 00:18:57.442 fused_ordering(528) 
00:18:57.442 fused_ordering(529) 00:18:57.442 fused_ordering(530) 00:18:57.442 fused_ordering(531) 00:18:57.442 fused_ordering(532) 00:18:57.442 fused_ordering(533) 00:18:57.442 fused_ordering(534) 00:18:57.442 fused_ordering(535) 00:18:57.443 fused_ordering(536) 00:18:57.443 fused_ordering(537) 00:18:57.443 fused_ordering(538) 00:18:57.443 fused_ordering(539) 00:18:57.443 fused_ordering(540) 00:18:57.443 fused_ordering(541) 00:18:57.443 fused_ordering(542) 00:18:57.443 fused_ordering(543) 00:18:57.443 fused_ordering(544) 00:18:57.443 fused_ordering(545) 00:18:57.443 fused_ordering(546) 00:18:57.443 fused_ordering(547) 00:18:57.443 fused_ordering(548) 00:18:57.443 fused_ordering(549) 00:18:57.443 fused_ordering(550) 00:18:57.443 fused_ordering(551) 00:18:57.443 fused_ordering(552) 00:18:57.443 fused_ordering(553) 00:18:57.443 fused_ordering(554) 00:18:57.443 fused_ordering(555) 00:18:57.443 fused_ordering(556) 00:18:57.443 fused_ordering(557) 00:18:57.443 fused_ordering(558) 00:18:57.443 fused_ordering(559) 00:18:57.443 fused_ordering(560) 00:18:57.443 fused_ordering(561) 00:18:57.443 fused_ordering(562) 00:18:57.443 fused_ordering(563) 00:18:57.443 fused_ordering(564) 00:18:57.443 fused_ordering(565) 00:18:57.443 fused_ordering(566) 00:18:57.443 fused_ordering(567) 00:18:57.443 fused_ordering(568) 00:18:57.443 fused_ordering(569) 00:18:57.443 fused_ordering(570) 00:18:57.443 fused_ordering(571) 00:18:57.443 fused_ordering(572) 00:18:57.443 fused_ordering(573) 00:18:57.443 fused_ordering(574) 00:18:57.443 fused_ordering(575) 00:18:57.443 fused_ordering(576) 00:18:57.443 fused_ordering(577) 00:18:57.443 fused_ordering(578) 00:18:57.443 fused_ordering(579) 00:18:57.443 fused_ordering(580) 00:18:57.443 fused_ordering(581) 00:18:57.443 fused_ordering(582) 00:18:57.443 fused_ordering(583) 00:18:57.443 fused_ordering(584) 00:18:57.443 fused_ordering(585) 00:18:57.443 fused_ordering(586) 00:18:57.443 fused_ordering(587) 00:18:57.443 fused_ordering(588) 00:18:57.443 fused_ordering(589) 00:18:57.443 fused_ordering(590) 00:18:57.443 fused_ordering(591) 00:18:57.443 fused_ordering(592) 00:18:57.443 fused_ordering(593) 00:18:57.443 fused_ordering(594) 00:18:57.443 fused_ordering(595) 00:18:57.443 fused_ordering(596) 00:18:57.443 fused_ordering(597) 00:18:57.443 fused_ordering(598) 00:18:57.443 fused_ordering(599) 00:18:57.443 fused_ordering(600) 00:18:57.443 fused_ordering(601) 00:18:57.443 fused_ordering(602) 00:18:57.443 fused_ordering(603) 00:18:57.443 fused_ordering(604) 00:18:57.443 fused_ordering(605) 00:18:57.443 fused_ordering(606) 00:18:57.443 fused_ordering(607) 00:18:57.443 fused_ordering(608) 00:18:57.443 fused_ordering(609) 00:18:57.443 fused_ordering(610) 00:18:57.443 fused_ordering(611) 00:18:57.443 fused_ordering(612) 00:18:57.443 fused_ordering(613) 00:18:57.443 fused_ordering(614) 00:18:57.443 fused_ordering(615) 00:18:58.015 fused_ordering(616) 00:18:58.015 fused_ordering(617) 00:18:58.015 fused_ordering(618) 00:18:58.015 fused_ordering(619) 00:18:58.015 fused_ordering(620) 00:18:58.015 fused_ordering(621) 00:18:58.015 fused_ordering(622) 00:18:58.015 fused_ordering(623) 00:18:58.015 fused_ordering(624) 00:18:58.015 fused_ordering(625) 00:18:58.015 fused_ordering(626) 00:18:58.015 fused_ordering(627) 00:18:58.015 fused_ordering(628) 00:18:58.015 fused_ordering(629) 00:18:58.015 fused_ordering(630) 00:18:58.015 fused_ordering(631) 00:18:58.015 fused_ordering(632) 00:18:58.015 fused_ordering(633) 00:18:58.015 fused_ordering(634) 00:18:58.015 fused_ordering(635) 00:18:58.015 
fused_ordering(636) 00:18:58.015 fused_ordering(637) 00:18:58.015 fused_ordering(638) 00:18:58.015 fused_ordering(639) 00:18:58.015 fused_ordering(640) 00:18:58.015 fused_ordering(641) 00:18:58.015 fused_ordering(642) 00:18:58.015 fused_ordering(643) 00:18:58.015 fused_ordering(644) 00:18:58.015 fused_ordering(645) 00:18:58.015 fused_ordering(646) 00:18:58.015 fused_ordering(647) 00:18:58.015 fused_ordering(648) 00:18:58.015 fused_ordering(649) 00:18:58.015 fused_ordering(650) 00:18:58.015 fused_ordering(651) 00:18:58.015 fused_ordering(652) 00:18:58.015 fused_ordering(653) 00:18:58.015 fused_ordering(654) 00:18:58.015 fused_ordering(655) 00:18:58.015 fused_ordering(656) 00:18:58.015 fused_ordering(657) 00:18:58.015 fused_ordering(658) 00:18:58.015 fused_ordering(659) 00:18:58.015 fused_ordering(660) 00:18:58.015 fused_ordering(661) 00:18:58.015 fused_ordering(662) 00:18:58.015 fused_ordering(663) 00:18:58.015 fused_ordering(664) 00:18:58.015 fused_ordering(665) 00:18:58.015 fused_ordering(666) 00:18:58.015 fused_ordering(667) 00:18:58.015 fused_ordering(668) 00:18:58.015 fused_ordering(669) 00:18:58.015 fused_ordering(670) 00:18:58.015 fused_ordering(671) 00:18:58.015 fused_ordering(672) 00:18:58.015 fused_ordering(673) 00:18:58.015 fused_ordering(674) 00:18:58.015 fused_ordering(675) 00:18:58.015 fused_ordering(676) 00:18:58.015 fused_ordering(677) 00:18:58.015 fused_ordering(678) 00:18:58.015 fused_ordering(679) 00:18:58.015 fused_ordering(680) 00:18:58.015 fused_ordering(681) 00:18:58.016 fused_ordering(682) 00:18:58.016 fused_ordering(683) 00:18:58.016 fused_ordering(684) 00:18:58.016 fused_ordering(685) 00:18:58.016 fused_ordering(686) 00:18:58.016 fused_ordering(687) 00:18:58.016 fused_ordering(688) 00:18:58.016 fused_ordering(689) 00:18:58.016 fused_ordering(690) 00:18:58.016 fused_ordering(691) 00:18:58.016 fused_ordering(692) 00:18:58.016 fused_ordering(693) 00:18:58.016 fused_ordering(694) 00:18:58.016 fused_ordering(695) 00:18:58.016 fused_ordering(696) 00:18:58.016 fused_ordering(697) 00:18:58.016 fused_ordering(698) 00:18:58.016 fused_ordering(699) 00:18:58.016 fused_ordering(700) 00:18:58.016 fused_ordering(701) 00:18:58.016 fused_ordering(702) 00:18:58.016 fused_ordering(703) 00:18:58.016 fused_ordering(704) 00:18:58.016 fused_ordering(705) 00:18:58.016 fused_ordering(706) 00:18:58.016 fused_ordering(707) 00:18:58.016 fused_ordering(708) 00:18:58.016 fused_ordering(709) 00:18:58.016 fused_ordering(710) 00:18:58.016 fused_ordering(711) 00:18:58.016 fused_ordering(712) 00:18:58.016 fused_ordering(713) 00:18:58.016 fused_ordering(714) 00:18:58.016 fused_ordering(715) 00:18:58.016 fused_ordering(716) 00:18:58.016 fused_ordering(717) 00:18:58.016 fused_ordering(718) 00:18:58.016 fused_ordering(719) 00:18:58.016 fused_ordering(720) 00:18:58.016 fused_ordering(721) 00:18:58.016 fused_ordering(722) 00:18:58.016 fused_ordering(723) 00:18:58.016 fused_ordering(724) 00:18:58.016 fused_ordering(725) 00:18:58.016 fused_ordering(726) 00:18:58.016 fused_ordering(727) 00:18:58.016 fused_ordering(728) 00:18:58.016 fused_ordering(729) 00:18:58.016 fused_ordering(730) 00:18:58.016 fused_ordering(731) 00:18:58.016 fused_ordering(732) 00:18:58.016 fused_ordering(733) 00:18:58.016 fused_ordering(734) 00:18:58.016 fused_ordering(735) 00:18:58.016 fused_ordering(736) 00:18:58.016 fused_ordering(737) 00:18:58.016 fused_ordering(738) 00:18:58.016 fused_ordering(739) 00:18:58.016 fused_ordering(740) 00:18:58.016 fused_ordering(741) 00:18:58.016 fused_ordering(742) 00:18:58.016 fused_ordering(743) 
00:18:58.016 fused_ordering(744) 00:18:58.016 fused_ordering(745) 00:18:58.016 fused_ordering(746) 00:18:58.016 fused_ordering(747) 00:18:58.016 fused_ordering(748) 00:18:58.016 fused_ordering(749) 00:18:58.016 fused_ordering(750) 00:18:58.016 fused_ordering(751) 00:18:58.016 fused_ordering(752) 00:18:58.016 fused_ordering(753) 00:18:58.016 fused_ordering(754) 00:18:58.016 fused_ordering(755) 00:18:58.016 fused_ordering(756) 00:18:58.016 fused_ordering(757) 00:18:58.016 fused_ordering(758) 00:18:58.016 fused_ordering(759) 00:18:58.016 fused_ordering(760) 00:18:58.016 fused_ordering(761) 00:18:58.016 fused_ordering(762) 00:18:58.016 fused_ordering(763) 00:18:58.016 fused_ordering(764) 00:18:58.016 fused_ordering(765) 00:18:58.016 fused_ordering(766) 00:18:58.016 fused_ordering(767) 00:18:58.016 fused_ordering(768) 00:18:58.016 fused_ordering(769) 00:18:58.016 fused_ordering(770) 00:18:58.016 fused_ordering(771) 00:18:58.016 fused_ordering(772) 00:18:58.016 fused_ordering(773) 00:18:58.016 fused_ordering(774) 00:18:58.016 fused_ordering(775) 00:18:58.016 fused_ordering(776) 00:18:58.016 fused_ordering(777) 00:18:58.016 fused_ordering(778) 00:18:58.016 fused_ordering(779) 00:18:58.016 fused_ordering(780) 00:18:58.016 fused_ordering(781) 00:18:58.016 fused_ordering(782) 00:18:58.016 fused_ordering(783) 00:18:58.016 fused_ordering(784) 00:18:58.016 fused_ordering(785) 00:18:58.016 fused_ordering(786) 00:18:58.016 fused_ordering(787) 00:18:58.016 fused_ordering(788) 00:18:58.016 fused_ordering(789) 00:18:58.016 fused_ordering(790) 00:18:58.016 fused_ordering(791) 00:18:58.016 fused_ordering(792) 00:18:58.016 fused_ordering(793) 00:18:58.016 fused_ordering(794) 00:18:58.016 fused_ordering(795) 00:18:58.016 fused_ordering(796) 00:18:58.016 fused_ordering(797) 00:18:58.016 fused_ordering(798) 00:18:58.016 fused_ordering(799) 00:18:58.016 fused_ordering(800) 00:18:58.016 fused_ordering(801) 00:18:58.016 fused_ordering(802) 00:18:58.016 fused_ordering(803) 00:18:58.016 fused_ordering(804) 00:18:58.016 fused_ordering(805) 00:18:58.016 fused_ordering(806) 00:18:58.016 fused_ordering(807) 00:18:58.016 fused_ordering(808) 00:18:58.016 fused_ordering(809) 00:18:58.016 fused_ordering(810) 00:18:58.016 fused_ordering(811) 00:18:58.016 fused_ordering(812) 00:18:58.016 fused_ordering(813) 00:18:58.016 fused_ordering(814) 00:18:58.016 fused_ordering(815) 00:18:58.016 fused_ordering(816) 00:18:58.016 fused_ordering(817) 00:18:58.016 fused_ordering(818) 00:18:58.016 fused_ordering(819) 00:18:58.016 fused_ordering(820) 00:18:58.589 fused_ordering(821) 00:18:58.589 fused_ordering(822) 00:18:58.589 fused_ordering(823) 00:18:58.589 fused_ordering(824) 00:18:58.589 fused_ordering(825) 00:18:58.589 fused_ordering(826) 00:18:58.589 fused_ordering(827) 00:18:58.589 fused_ordering(828) 00:18:58.589 fused_ordering(829) 00:18:58.589 fused_ordering(830) 00:18:58.589 fused_ordering(831) 00:18:58.589 fused_ordering(832) 00:18:58.589 fused_ordering(833) 00:18:58.589 fused_ordering(834) 00:18:58.589 fused_ordering(835) 00:18:58.589 fused_ordering(836) 00:18:58.589 fused_ordering(837) 00:18:58.589 fused_ordering(838) 00:18:58.589 fused_ordering(839) 00:18:58.589 fused_ordering(840) 00:18:58.589 fused_ordering(841) 00:18:58.589 fused_ordering(842) 00:18:58.589 fused_ordering(843) 00:18:58.589 fused_ordering(844) 00:18:58.589 fused_ordering(845) 00:18:58.589 fused_ordering(846) 00:18:58.589 fused_ordering(847) 00:18:58.589 fused_ordering(848) 00:18:58.589 fused_ordering(849) 00:18:58.589 fused_ordering(850) 00:18:58.589 
fused_ordering(851) 00:18:58.589 fused_ordering(852) 00:18:58.589 fused_ordering(853) 00:18:58.589 fused_ordering(854) 00:18:58.589 fused_ordering(855) 00:18:58.589 fused_ordering(856) 00:18:58.589 fused_ordering(857) 00:18:58.589 fused_ordering(858) 00:18:58.589 fused_ordering(859) 00:18:58.589 fused_ordering(860) 00:18:58.589 fused_ordering(861) 00:18:58.589 fused_ordering(862) 00:18:58.589 fused_ordering(863) 00:18:58.589 fused_ordering(864) 00:18:58.589 fused_ordering(865) 00:18:58.589 fused_ordering(866) 00:18:58.589 fused_ordering(867) 00:18:58.589 fused_ordering(868) 00:18:58.589 fused_ordering(869) 00:18:58.589 fused_ordering(870) 00:18:58.589 fused_ordering(871) 00:18:58.589 fused_ordering(872) 00:18:58.589 fused_ordering(873) 00:18:58.589 fused_ordering(874) 00:18:58.589 fused_ordering(875) 00:18:58.589 fused_ordering(876) 00:18:58.589 fused_ordering(877) 00:18:58.589 fused_ordering(878) 00:18:58.589 fused_ordering(879) 00:18:58.589 fused_ordering(880) 00:18:58.590 fused_ordering(881) 00:18:58.590 fused_ordering(882) 00:18:58.590 fused_ordering(883) 00:18:58.590 fused_ordering(884) 00:18:58.590 fused_ordering(885) 00:18:58.590 fused_ordering(886) 00:18:58.590 fused_ordering(887) 00:18:58.590 fused_ordering(888) 00:18:58.590 fused_ordering(889) 00:18:58.590 fused_ordering(890) 00:18:58.590 fused_ordering(891) 00:18:58.590 fused_ordering(892) 00:18:58.590 fused_ordering(893) 00:18:58.590 fused_ordering(894) 00:18:58.590 fused_ordering(895) 00:18:58.590 fused_ordering(896) 00:18:58.590 fused_ordering(897) 00:18:58.590 fused_ordering(898) 00:18:58.590 fused_ordering(899) 00:18:58.590 fused_ordering(900) 00:18:58.590 fused_ordering(901) 00:18:58.590 fused_ordering(902) 00:18:58.590 fused_ordering(903) 00:18:58.590 fused_ordering(904) 00:18:58.590 fused_ordering(905) 00:18:58.590 fused_ordering(906) 00:18:58.590 fused_ordering(907) 00:18:58.590 fused_ordering(908) 00:18:58.590 fused_ordering(909) 00:18:58.590 fused_ordering(910) 00:18:58.590 fused_ordering(911) 00:18:58.590 fused_ordering(912) 00:18:58.590 fused_ordering(913) 00:18:58.590 fused_ordering(914) 00:18:58.590 fused_ordering(915) 00:18:58.590 fused_ordering(916) 00:18:58.590 fused_ordering(917) 00:18:58.590 fused_ordering(918) 00:18:58.590 fused_ordering(919) 00:18:58.590 fused_ordering(920) 00:18:58.590 fused_ordering(921) 00:18:58.590 fused_ordering(922) 00:18:58.590 fused_ordering(923) 00:18:58.590 fused_ordering(924) 00:18:58.590 fused_ordering(925) 00:18:58.590 fused_ordering(926) 00:18:58.590 fused_ordering(927) 00:18:58.590 fused_ordering(928) 00:18:58.590 fused_ordering(929) 00:18:58.590 fused_ordering(930) 00:18:58.590 fused_ordering(931) 00:18:58.590 fused_ordering(932) 00:18:58.590 fused_ordering(933) 00:18:58.590 fused_ordering(934) 00:18:58.590 fused_ordering(935) 00:18:58.590 fused_ordering(936) 00:18:58.590 fused_ordering(937) 00:18:58.590 fused_ordering(938) 00:18:58.590 fused_ordering(939) 00:18:58.590 fused_ordering(940) 00:18:58.590 fused_ordering(941) 00:18:58.590 fused_ordering(942) 00:18:58.590 fused_ordering(943) 00:18:58.590 fused_ordering(944) 00:18:58.590 fused_ordering(945) 00:18:58.590 fused_ordering(946) 00:18:58.590 fused_ordering(947) 00:18:58.590 fused_ordering(948) 00:18:58.590 fused_ordering(949) 00:18:58.590 fused_ordering(950) 00:18:58.590 fused_ordering(951) 00:18:58.590 fused_ordering(952) 00:18:58.590 fused_ordering(953) 00:18:58.590 fused_ordering(954) 00:18:58.590 fused_ordering(955) 00:18:58.590 fused_ordering(956) 00:18:58.590 fused_ordering(957) 00:18:58.590 fused_ordering(958) 
00:18:58.590 fused_ordering(959) 00:18:58.590 fused_ordering(960) 00:18:58.590 fused_ordering(961) 00:18:58.590 fused_ordering(962) 00:18:58.590 fused_ordering(963) 00:18:58.590 fused_ordering(964) 00:18:58.590 fused_ordering(965) 00:18:58.590 fused_ordering(966) 00:18:58.590 fused_ordering(967) 00:18:58.590 fused_ordering(968) 00:18:58.590 fused_ordering(969) 00:18:58.590 fused_ordering(970) 00:18:58.590 fused_ordering(971) 00:18:58.590 fused_ordering(972) 00:18:58.590 fused_ordering(973) 00:18:58.590 fused_ordering(974) 00:18:58.590 fused_ordering(975) 00:18:58.590 fused_ordering(976) 00:18:58.590 fused_ordering(977) 00:18:58.590 fused_ordering(978) 00:18:58.590 fused_ordering(979) 00:18:58.590 fused_ordering(980) 00:18:58.590 fused_ordering(981) 00:18:58.590 fused_ordering(982) 00:18:58.590 fused_ordering(983) 00:18:58.590 fused_ordering(984) 00:18:58.590 fused_ordering(985) 00:18:58.590 fused_ordering(986) 00:18:58.590 fused_ordering(987) 00:18:58.590 fused_ordering(988) 00:18:58.590 fused_ordering(989) 00:18:58.590 fused_ordering(990) 00:18:58.590 fused_ordering(991) 00:18:58.590 fused_ordering(992) 00:18:58.590 fused_ordering(993) 00:18:58.590 fused_ordering(994) 00:18:58.590 fused_ordering(995) 00:18:58.590 fused_ordering(996) 00:18:58.590 fused_ordering(997) 00:18:58.590 fused_ordering(998) 00:18:58.590 fused_ordering(999) 00:18:58.590 fused_ordering(1000) 00:18:58.590 fused_ordering(1001) 00:18:58.590 fused_ordering(1002) 00:18:58.590 fused_ordering(1003) 00:18:58.590 fused_ordering(1004) 00:18:58.590 fused_ordering(1005) 00:18:58.590 fused_ordering(1006) 00:18:58.590 fused_ordering(1007) 00:18:58.590 fused_ordering(1008) 00:18:58.590 fused_ordering(1009) 00:18:58.590 fused_ordering(1010) 00:18:58.590 fused_ordering(1011) 00:18:58.590 fused_ordering(1012) 00:18:58.590 fused_ordering(1013) 00:18:58.590 fused_ordering(1014) 00:18:58.590 fused_ordering(1015) 00:18:58.590 fused_ordering(1016) 00:18:58.590 fused_ordering(1017) 00:18:58.590 fused_ordering(1018) 00:18:58.590 fused_ordering(1019) 00:18:58.590 fused_ordering(1020) 00:18:58.590 fused_ordering(1021) 00:18:58.590 fused_ordering(1022) 00:18:58.590 fused_ordering(1023) 00:18:58.590 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:58.590 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:58.590 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:58.590 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:58.590 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:58.590 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:58.590 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:58.590 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:58.590 rmmod nvme_tcp 00:18:58.852 rmmod nvme_fabrics 00:18:58.852 rmmod nvme_keyring 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:58.852 13:24:06 
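Teardown mirrors setup. The modprobe -v -r nvme-tcp above removes the module together with its no-longer-used dependencies, with -v echoing each underlying rmmod, which is what produced the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines. Likewise, the firewall rule added during init carried an '-m comment --comment SPDK_NVMF:...' tag precisely so the iptr step a few lines below can strip every SPDK rule in one pass:

    modprobe -v -r nvme-tcp                       # -r cascades to unused deps, -v echoes each rmmod
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop all SPDK-tagged rules at once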
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3824745 ']' 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3824745 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3824745 ']' 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3824745 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3824745 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3824745' 00:18:58.852 killing process with pid 3824745 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3824745 00:18:58.852 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3824745 00:18:59.796 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:59.796 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:59.796 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:59.796 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:59.796 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:59.796 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:59.796 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:59.796 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:59.796 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:59.796 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.796 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.796 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.839 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:01.839 00:19:01.839 real 0m15.306s 00:19:01.839 user 0m8.437s 00:19:01.839 sys 0m7.885s 00:19:01.839 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:01.839 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:01.839 ************************************ 00:19:01.839 END TEST nvmf_fused_ordering 00:19:01.839 
************************************ 00:19:01.839 13:24:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:19:01.839 13:24:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:01.839 13:24:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:01.839 13:24:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:01.839 ************************************ 00:19:01.839 START TEST nvmf_ns_masking 00:19:01.839 ************************************ 00:19:01.839 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:19:01.839 * Looking for test storage... 00:19:01.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:01.839 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:01.839 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:19:01.839 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:02.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.121 --rc genhtml_branch_coverage=1 00:19:02.121 --rc genhtml_function_coverage=1 00:19:02.121 --rc genhtml_legend=1 00:19:02.121 --rc geninfo_all_blocks=1 00:19:02.121 --rc geninfo_unexecuted_blocks=1 00:19:02.121 00:19:02.121 ' 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:02.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.121 --rc genhtml_branch_coverage=1 00:19:02.121 --rc genhtml_function_coverage=1 00:19:02.121 --rc genhtml_legend=1 00:19:02.121 --rc geninfo_all_blocks=1 00:19:02.121 --rc geninfo_unexecuted_blocks=1 00:19:02.121 00:19:02.121 ' 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:02.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.121 --rc genhtml_branch_coverage=1 00:19:02.121 --rc genhtml_function_coverage=1 00:19:02.121 --rc genhtml_legend=1 00:19:02.121 --rc geninfo_all_blocks=1 00:19:02.121 --rc geninfo_unexecuted_blocks=1 00:19:02.121 00:19:02.121 ' 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:02.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.121 --rc genhtml_branch_coverage=1 00:19:02.121 --rc genhtml_function_coverage=1 00:19:02.121 --rc genhtml_legend=1 00:19:02.121 --rc geninfo_all_blocks=1 00:19:02.121 --rc geninfo_unexecuted_blocks=1 00:19:02.121 00:19:02.121 ' 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.121 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:02.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
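The "[: : integer expression expected" message above is bash's test builtin receiving an empty operand in a numeric comparison (common.sh line 33, '[' '' -eq 1 ']'). A sketch of the usual hardening, with FLAG standing in for whatever variable line 33 actually tests (placeholder name, not the upstream code):

    # ${FLAG:-0} substitutes a numeric default so '[' never sees an empty string
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi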
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=424de704-fa48-463b-a972-e6705eb1ef32 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f3c81ee1-95b2-4421-a17d-0e194966d636 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c4de8db9-acf6-4961-9b5c-6ae357f12757 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:19:02.122 13:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:10.269 13:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:10.269 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:10.269 13:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:10.269 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:10.269 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:10.270 Found net devices under 0000:31:00.0: cvl_0_0 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:10.270 Found net devices under 0000:31:00.1: cvl_0_1 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:10.270 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:10.531 13:24:18 
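Condensed, the loopback topology nvmftestinit is building here splits the two E810 ports across network namespaces (commands lifted from the trace; run as root):

    ip netns add cvl_0_0_ns_spdk                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator NIC stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up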
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:10.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:19:10.531 00:19:10.531 --- 10.0.0.2 ping statistics --- 00:19:10.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.531 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:10.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:10.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:19:10.531 00:19:10.531 --- 10.0.0.1 ping statistics --- 00:19:10.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.531 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:10.531 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3830780 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3830780 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3830780 ']' 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:10.532 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:10.793 [2024-11-07 13:24:18.608361] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:19:10.793 [2024-11-07 13:24:18.608487] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.793 [2024-11-07 13:24:18.767916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.053 [2024-11-07 13:24:18.862692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.053 [2024-11-07 13:24:18.862737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.053 [2024-11-07 13:24:18.862749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.053 [2024-11-07 13:24:18.862761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.053 [2024-11-07 13:24:18.862772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
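The nvmfappstart sequence above amounts to launching the target inside the namespace and polling its RPC socket; a sketch (binary and flags as logged; the wait loop is a paraphrase of waitforlisten built on rpc.py's rpc_get_methods):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # block until the app answers on the default socket, /var/tmp/spdk.sock
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
        sleep 0.5
    done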
00:19:11.053 [2024-11-07 13:24:18.864031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.625 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:11.625 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:19:11.625 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.625 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:11.625 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:11.625 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.625 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:11.625 [2024-11-07 13:24:19.570492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.625 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:11.625 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:11.625 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:11.886 Malloc1 00:19:11.886 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:12.147 Malloc2 00:19:12.147 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:12.408 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:12.668 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.668 [2024-11-07 13:24:20.580702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.668 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:12.668 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c4de8db9-acf6-4961-9b5c-6ae357f12757 -a 10.0.0.2 -s 4420 -i 4 00:19:12.929 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:12.929 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:19:12.929 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:12.929 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:19:12.929 
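Spelled out, the initiator command behind the connect helper is (values exactly as traced; -i caps the number of I/O queues requested):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 \
        -I c4de8db9-acf6-4961-9b5c-6ae357f12757 \
        -i 4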
13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:19:14.844 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:14.844 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:14.844 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:15.105 [ 0]:0x1 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c04042e9f6a74fd5b18fbaf9b25945d7 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c04042e9f6a74fd5b18fbaf9b25945d7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:15.105 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:15.366 [ 0]:0x1 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c04042e9f6a74fd5b18fbaf9b25945d7 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c04042e9f6a74fd5b18fbaf9b25945d7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:15.366 13:24:23 
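The ns_is_visible helper exercised here boils down to two probes: the NSID must appear in list-ns, and its NGUID must be non-zero; a sketch reconstructed from the traced commands:

    nvme list-ns /dev/nvme0 | grep 0x1                   # masked namespaces are absent from this listing
    nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]   # an all-zero NGUID means the namespace is hidden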
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:15.366 [ 1]:0x2 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=113c42603b4849c0bd4f14d56105899b 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 113c42603b4849c0bd4f14d56105899b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:15.366 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:15.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:15.626 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:15.627 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:15.887 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:15.887 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c4de8db9-acf6-4961-9b5c-6ae357f12757 -a 10.0.0.2 -s 4420 -i 4 00:19:16.148 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:16.148 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:19:16.148 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:16.148 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:19:16.148 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:19:16.148 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:19:18.063 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:18.063 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:18.063 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:18.063 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:19:18.063 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:18.063 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:19:18.063 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:18.063 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:18.063 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:18.063 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:18.063 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:18.063 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:18.063 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:18.063 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:18.063 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.063 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:18.063 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.063 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:18.063 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:18.063 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:18.324 [ 0]:0x2 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=113c42603b4849c0bd4f14d56105899b 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 113c42603b4849c0bd4f14d56105899b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:18.324 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:18.584 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:18.584 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:18.584 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:18.584 [ 0]:0x1 00:19:18.585 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:18.585 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:18.585 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c04042e9f6a74fd5b18fbaf9b25945d7 00:19:18.585 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c04042e9f6a74fd5b18fbaf9b25945d7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:18.585 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:18.585 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:18.585 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:18.585 [ 1]:0x2 00:19:18.585 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:18.585 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:18.585 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=113c42603b4849c0bd4f14d56105899b 00:19:18.585 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 113c42603b4849c0bd4f14d56105899b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:18.585 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.846 13:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:18.846 [ 0]:0x2 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=113c42603b4849c0bd4f14d56105899b 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 113c42603b4849c0bd4f14d56105899b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:18.846 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:19.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:19.106 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:19.106 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:19.106 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c4de8db9-acf6-4961-9b5c-6ae357f12757 -a 10.0.0.2 -s 4420 -i 4 00:19:19.367 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:19.367 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:19:19.367 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:19.367 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:19:19.367 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:19:19.367 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:19:21.911 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:21.911 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:21.911 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:21.911 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:19:21.911 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:21.911 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:19:21.911 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:21.911 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:21.911 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:21.911 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:21.911 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:21.912 [ 0]:0x1 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c04042e9f6a74fd5b18fbaf9b25945d7 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c04042e9f6a74fd5b18fbaf9b25945d7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:21.912 [ 1]:0x2 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=113c42603b4849c0bd4f14d56105899b 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 113c42603b4849c0bd4f14d56105899b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:21.912 [ 0]:0x2 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:21.912 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=113c42603b4849c0bd4f14d56105899b 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 113c42603b4849c0bd4f14d56105899b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.172 13:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:22.172 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:22.172 [2024-11-07 13:24:30.114908] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:22.172 request: 00:19:22.172 { 00:19:22.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.172 "nsid": 2, 00:19:22.172 "host": "nqn.2016-06.io.spdk:host1", 00:19:22.172 "method": "nvmf_ns_remove_host", 00:19:22.172 "req_id": 1 00:19:22.172 } 00:19:22.172 Got JSON-RPC error response 00:19:22.172 response: 00:19:22.172 { 00:19:22.172 "code": -32602, 00:19:22.172 "message": "Invalid parameters" 00:19:22.172 } 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:22.172 13:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.172 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:22.173 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:22.173 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.433 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:22.434 [ 0]:0x2 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=113c42603b4849c0bd4f14d56105899b 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 113c42603b4849c0bd4f14d56105899b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:22.434 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:22.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:22.695 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3833191 00:19:22.695 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.695 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:22.695 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3833191 /var/tmp/host.sock 00:19:22.695 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3833191 ']' 00:19:22.695 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:19:22.695 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:22.695 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:22.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:22.695 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:22.695 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:22.695 [2024-11-07 13:24:30.548805] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:19:22.695 [2024-11-07 13:24:30.548912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3833191 ] 00:19:22.695 [2024-11-07 13:24:30.690592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.956 [2024-11-07 13:24:30.789683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.528 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:23.528 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:19:23.528 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:23.789 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:23.789 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 424de704-fa48-463b-a972-e6705eb1ef32 00:19:23.789 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:23.789 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 424DE704FA48463BA972E6705EB1EF32 -i 00:19:24.049 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f3c81ee1-95b2-4421-a17d-0e194966d636 00:19:24.049 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:24.049 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F3C81EE195B24421A17D0E194966D636 -i 00:19:24.310 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:24.310 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:24.570 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:24.570 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:24.831 nvme0n1 00:19:24.831 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:24.831 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:25.093 nvme1n2 00:19:25.093 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:25.093 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:25.093 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:25.093 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:25.093 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:25.353 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:25.354 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:25.354 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:25.354 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:25.613 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 424de704-fa48-463b-a972-e6705eb1ef32 == \4\2\4\d\e\7\0\4\-\f\a\4\8\-\4\6\3\b\-\a\9\7\2\-\e\6\7\0\5\e\b\1\e\f\3\2 ]] 00:19:25.613 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:25.613 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:25.613 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:25.613 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
f3c81ee1-95b2-4421-a17d-0e194966d636 == \f\3\c\8\1\e\e\1\-\9\5\b\2\-\4\4\2\1\-\a\1\7\d\-\0\e\1\9\4\9\6\6\d\6\3\6 ]] 00:19:25.613 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:25.873 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 424de704-fa48-463b-a972-e6705eb1ef32 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 424DE704FA48463BA972E6705EB1EF32 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 424DE704FA48463BA972E6705EB1EF32 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:26.134 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 424DE704FA48463BA972E6705EB1EF32 00:19:26.134 [2024-11-07 13:24:34.090774] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:26.134 [2024-11-07 13:24:34.090825] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:26.134 [2024-11-07 13:24:34.090843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.134 request: 00:19:26.134 { 00:19:26.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.134 "namespace": { 00:19:26.134 "bdev_name": 
"invalid", 00:19:26.134 "nsid": 1, 00:19:26.134 "nguid": "424DE704FA48463BA972E6705EB1EF32", 00:19:26.134 "no_auto_visible": false 00:19:26.134 }, 00:19:26.134 "method": "nvmf_subsystem_add_ns", 00:19:26.134 "req_id": 1 00:19:26.134 } 00:19:26.134 Got JSON-RPC error response 00:19:26.134 response: 00:19:26.134 { 00:19:26.134 "code": -32602, 00:19:26.134 "message": "Invalid parameters" 00:19:26.134 } 00:19:26.134 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:26.134 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:26.134 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:26.134 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:26.134 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 424de704-fa48-463b-a972-e6705eb1ef32 00:19:26.134 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:26.134 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 424DE704FA48463BA972E6705EB1EF32 -i 00:19:26.396 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:28.311 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:28.311 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:28.311 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:28.573 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:28.573 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3833191 00:19:28.573 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3833191 ']' 00:19:28.573 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3833191 00:19:28.573 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:19:28.573 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:28.573 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3833191 00:19:28.573 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:28.573 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:28.573 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3833191' 00:19:28.573 killing process with pid 3833191 00:19:28.573 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3833191 00:19:28.573 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3833191 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:29.959 rmmod nvme_tcp 00:19:29.959 rmmod nvme_fabrics 00:19:29.959 rmmod nvme_keyring 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3830780 ']' 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3830780 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3830780 ']' 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3830780 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:29.959 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3830780 00:19:30.220 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:30.220 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:30.220 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3830780' 00:19:30.220 killing process with pid 3830780 00:19:30.220 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3830780 00:19:30.220 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3830780 00:19:31.163 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:31.163 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:31.163 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:31.163 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:31.163 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:31.163 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:19:31.163 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:31.163 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:31.163 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:31.163 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.163 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.163 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.077 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:33.077 00:19:33.077 real 0m31.361s 00:19:33.077 user 0m34.987s 00:19:33.077 sys 0m9.125s 00:19:33.077 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:33.077 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:33.077 ************************************ 00:19:33.077 END TEST nvmf_ns_masking 00:19:33.077 ************************************ 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.340 ************************************ 00:19:33.340 START TEST nvmf_nvme_cli 00:19:33.340 ************************************ 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:33.340 * Looking for test storage... 
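The nvmf_ns_masking suite that just ended loops on a single probe: a namespace counts as visible to a host only when `nvme list-ns` reports its NSID and `nvme id-ns` returns a non-zero NGUID. Before the nvme_cli output continues, here is a minimal bash reconstruction of that probe, condensed from the target/ns_masking.sh@43-45 and nvmf/common.sh@787 entries traced above. The helper names match the trace, but the bodies are a simplified reading of it, not the verbatim script (for instance, the real probe prints the matched "[ 0]:0x1" lines rather than using grep -q).

    # Sketch only -- condensed from the xtrace above, not the shipped script.
    ns_is_visible() {
        local nsid=$1
        # Does the controller list this NSID at all?
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # A masked namespace identifies with an all-zero NGUID in this test.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    # uuid2nguid, as traced at nvmf/common.sh@787: drop the dashes to turn a UUID
    # into a 32-hex-digit NGUID. The RPCs above receive uppercase, so the real
    # helper presumably upcases as well (assumption).
    uuid2nguid() {
        local u=$1
        tr -d - <<< "${u^^}"
    }

In the log, `NOT ns_is_visible 0x1` is the negative form of the same assertion: after nvmf_ns_remove_host, the probe must fail, which is why the NGUID reads back as all zeroes in the entries above.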
00:19:33.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:33.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.340 --rc genhtml_branch_coverage=1 00:19:33.340 --rc genhtml_function_coverage=1 00:19:33.340 --rc genhtml_legend=1 00:19:33.340 --rc geninfo_all_blocks=1 00:19:33.340 --rc geninfo_unexecuted_blocks=1 00:19:33.340 00:19:33.340 ' 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:33.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.340 --rc genhtml_branch_coverage=1 00:19:33.340 --rc genhtml_function_coverage=1 00:19:33.340 --rc genhtml_legend=1 00:19:33.340 --rc geninfo_all_blocks=1 00:19:33.340 --rc geninfo_unexecuted_blocks=1 00:19:33.340 00:19:33.340 ' 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:33.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.340 --rc genhtml_branch_coverage=1 00:19:33.340 --rc genhtml_function_coverage=1 00:19:33.340 --rc genhtml_legend=1 00:19:33.340 --rc geninfo_all_blocks=1 00:19:33.340 --rc geninfo_unexecuted_blocks=1 00:19:33.340 00:19:33.340 ' 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:33.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.340 --rc genhtml_branch_coverage=1 00:19:33.340 --rc genhtml_function_coverage=1 00:19:33.340 --rc genhtml_legend=1 00:19:33.340 --rc geninfo_all_blocks=1 00:19:33.340 --rc geninfo_unexecuted_blocks=1 00:19:33.340 00:19:33.340 ' 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
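The `lt 1.15 2` walk just above is scripts/common.sh deciding whether the installed lcov predates 2.x, so that the matching `--rc lcov_branch_coverage=1` style LCOV_OPTS get exported. A self-contained sketch of that dotted-version compare follows; the function name version_lt is mine, and the real cmp_versions also splits on `.-:` and handles the `>` and `=` operators, which this sketch omits.

    # Sketch of the dotted-version test traced above; '<' path only.
    version_lt() {
        local IFS=.
        local -a v1=($1) v2=($2)
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }

    # Mirrors the trace: 1.15 < 2, so the old-lcov coverage flags are selected.
    version_lt 1.15 2 && echo "old lcov detected"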
00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.340 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:33.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:33.341 13:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.341 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.603 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:33.603 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:33.603 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:33.603 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.747 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:41.747 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:41.747 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:41.747 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:41.747 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:41.747 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:41.747 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:41.748 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:41.748 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.748 
13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:41.748 Found net devices under 0000:31:00.0: cvl_0_0 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:41.748 Found net devices under 0000:31:00.1: cvl_0_1 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:41.748 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:42.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:19:42.009 00:19:42.009 --- 10.0.0.2 ping statistics --- 00:19:42.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.009 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:42.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:42.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:19:42.009 00:19:42.009 --- 10.0.0.1 ping statistics --- 00:19:42.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.009 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3839685 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3839685 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3839685 ']' 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:42.009 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:42.009 [2024-11-07 13:24:49.967457] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
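
The nvmf_tcp_init trace above boils down to a short network-namespace recipe. A minimal standalone sketch, using only the interface names, addresses, and commands visible in the trace (cvl_0_0/cvl_0_1, 10.0.0.0/24, port 4420); this is a reconstruction for readability, not the library code itself:

    # Move the target-side port into its own namespace so initiator and
    # target traffic cross a real link instead of staying in one stack.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns

The two one-packet pings at the end are the readiness check: only after both directions answer does the harness wrap NVMF_APP in "ip netns exec" and start the target, which is what the SPDK startup notices below show.
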
00:19:42.009 [2024-11-07 13:24:49.967567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.270 [2024-11-07 13:24:50.128929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:42.270 [2024-11-07 13:24:50.231494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.270 [2024-11-07 13:24:50.231543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.270 [2024-11-07 13:24:50.231555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.270 [2024-11-07 13:24:50.231567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.270 [2024-11-07 13:24:50.231577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:42.270 [2024-11-07 13:24:50.233880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.270 [2024-11-07 13:24:50.233968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.270 [2024-11-07 13:24:50.234305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.270 [2024-11-07 13:24:50.234322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:42.840 [2024-11-07 13:24:50.787257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.840 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:43.100 Malloc0 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:43.100 Malloc1 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:43.100 [2024-11-07 13:24:50.965596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.100 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:19:43.360 00:19:43.360 Discovery Log Number of Records 2, Generation counter 2 00:19:43.360 =====Discovery Log Entry 0====== 00:19:43.360 trtype: tcp 00:19:43.360 adrfam: ipv4 00:19:43.360 subtype: current discovery subsystem 00:19:43.360 treq: not required 00:19:43.360 portid: 0 00:19:43.360 trsvcid: 4420 00:19:43.360 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:43.360 traddr: 10.0.0.2 00:19:43.360 eflags: explicit discovery connections, duplicate discovery information 00:19:43.360 sectype: none 00:19:43.360 =====Discovery Log Entry 1====== 00:19:43.360 trtype: tcp 00:19:43.360 adrfam: ipv4 00:19:43.360 subtype: nvme subsystem 00:19:43.360 treq: not required 00:19:43.360 portid: 0 00:19:43.360 trsvcid: 4420 00:19:43.360 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:43.360 traddr: 10.0.0.2 00:19:43.360 eflags: none 00:19:43.360 sectype: none 00:19:43.360 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:43.360 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:43.360 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:43.360 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:43.360 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:43.360 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:43.360 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:43.360 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:43.360 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:43.360 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:43.360 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:44.745 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:44.745 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:19:44.745 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:44.745 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:19:44.745 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:19:44.745 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:47.285 13:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:47.285 /dev/nvme0n2 ]] 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:47.285 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:47.286 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:47.286 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:47.286 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:47.286 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:47.286 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:47.286 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:47.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:47.286 13:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:47.286 rmmod nvme_tcp 00:19:47.286 rmmod nvme_fabrics 00:19:47.286 rmmod nvme_keyring 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3839685 ']' 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3839685 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3839685 ']' 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3839685 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3839685 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3839685' 00:19:47.286 killing process with pid 3839685 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3839685 00:19:47.286 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3839685 00:19:48.223 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:48.223 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:48.223 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:48.223 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:48.223 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:48.223 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:48.223 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:48.223 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:48.223 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:48.223 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.223 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.223 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:50.762 00:19:50.762 real 0m17.188s 00:19:50.762 user 0m25.667s 00:19:50.762 sys 0m7.177s 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:50.762 ************************************ 00:19:50.762 END TEST nvmf_nvme_cli 00:19:50.762 ************************************ 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:50.762 ************************************ 00:19:50.762 START TEST nvmf_auth_target 00:19:50.762 ************************************ 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 
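
For reference, the nvmf_nvme_cli test that just finished above reduces to the following RPC/CLI sequence. This is a condensed sketch assembled from the traced arguments (rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock); the hostnqn/hostid flags, the -d/-i subsystem options, and the waitforserial polling are omitted for brevity:

    # Provision a two-namespace subsystem on the target, then exercise it
    # from the initiator, mirroring the trace above.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme discover -t tcp -a 10.0.0.2 -s 4420      # two discovery log entries expected
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # ... /dev/nvme0n1 and /dev/nvme0n2 appear, one per Malloc namespace ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
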
00:19:50.762 * Looking for test storage... 00:19:50.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.762 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:50.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.763 --rc genhtml_branch_coverage=1 00:19:50.763 --rc genhtml_function_coverage=1 00:19:50.763 --rc genhtml_legend=1 00:19:50.763 --rc geninfo_all_blocks=1 00:19:50.763 --rc geninfo_unexecuted_blocks=1 00:19:50.763 00:19:50.763 ' 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:50.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.763 --rc genhtml_branch_coverage=1 00:19:50.763 --rc genhtml_function_coverage=1 00:19:50.763 --rc genhtml_legend=1 00:19:50.763 --rc geninfo_all_blocks=1 00:19:50.763 --rc geninfo_unexecuted_blocks=1 00:19:50.763 00:19:50.763 ' 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:50.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.763 --rc genhtml_branch_coverage=1 00:19:50.763 --rc genhtml_function_coverage=1 00:19:50.763 --rc genhtml_legend=1 00:19:50.763 --rc geninfo_all_blocks=1 00:19:50.763 --rc geninfo_unexecuted_blocks=1 00:19:50.763 00:19:50.763 ' 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:50.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.763 --rc genhtml_branch_coverage=1 00:19:50.763 --rc genhtml_function_coverage=1 00:19:50.763 --rc genhtml_legend=1 00:19:50.763 --rc geninfo_all_blocks=1 00:19:50.763 --rc geninfo_unexecuted_blocks=1 00:19:50.763 00:19:50.763 ' 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.763 13:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:50.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.763 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.764 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.764 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:50.764 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:50.764 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:50.764 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.899 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:58.900 
13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:58.900 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.900 13:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:58.900 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:58.900 Found net devices under 0000:31:00.0: cvl_0_0 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:58.900 Found net devices under 0000:31:00.1: cvl_0_1 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.900 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:58.900 13:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:58.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:19:58.900 00:19:58.900 --- 10.0.0.2 ping statistics --- 00:19:58.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.900 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:19:59.162 00:19:59.162 --- 10.0.0.1 ping statistics --- 00:19:59.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.162 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3845471 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3845471 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3845471 ']' 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
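
The app start traced here follows the same pattern as in the nvme_cli test: launch nvmf_tgt inside the test namespace, record its pid, and block until its RPC socket answers. The launch line below is taken from the trace; the polling loop is an illustrative stand-in for waitforlisten, whose body the trace does not show:

    # Start the target inside the namespace with auth tracing enabled.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the target is ready (stand-in loop).
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
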
00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:59.162 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3845730 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=14f19e78550c748c23a7941f5314bc675e04ba6aa920831a 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.sM1 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 14f19e78550c748c23a7941f5314bc675e04ba6aa920831a 0 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 14f19e78550c748c23a7941f5314bc675e04ba6aa920831a 0 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=14f19e78550c748c23a7941f5314bc675e04ba6aa920831a 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
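At this point two SPDK applications are up: nvmf_tgt inside the namespace is the authenticating target, and a second spdk_tgt on /var/tmp/host.sock hosts the initiator-side bdev_nvme code. A sketch of that split (paths relative to the spdk checkout; the readiness polls are a simplified stand-in for the waitforlisten helper above):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &   # target, reachable at 10.0.0.2
./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &                     # host app, separate RPC socket
until ./scripts/rpc.py rpc_get_methods &> /dev/null; do sleep 0.1; done            # target RPC ready
until ./scripts/rpc.py -s /var/tmp/host.sock rpc_get_methods &> /dev/null; do sleep 0.1; done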
00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.sM1 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.sM1 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.sM1 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=458f8a8f12e07a6f6360319bc6ea19c797c1abf4f4da9b1a1bfc2dac073c2bcb 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kV7 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 458f8a8f12e07a6f6360319bc6ea19c797c1abf4f4da9b1a1bfc2dac073c2bcb 3 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 458f8a8f12e07a6f6360319bc6ea19c797c1abf4f4da9b1a1bfc2dac073c2bcb 3 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=458f8a8f12e07a6f6360319bc6ea19c797c1abf4f4da9b1a1bfc2dac073c2bcb 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kV7 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kV7 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.kV7 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:00.104 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:00.104 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f4c793ba025ecabda9e4af405fd2fd6c 00:20:00.104 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:00.104 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.H7o 00:20:00.104 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f4c793ba025ecabda9e4af405fd2fd6c 1 00:20:00.104 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f4c793ba025ecabda9e4af405fd2fd6c 1 00:20:00.104 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:00.104 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:00.104 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f4c793ba025ecabda9e4af405fd2fd6c 00:20:00.104 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:00.104 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:00.104 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.H7o 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.H7o 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.H7o 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b85d56de10e934f488abea7a6726ef4ad45dbaee267de76b 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.FQM 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b85d56de10e934f488abea7a6726ef4ad45dbaee267de76b 2 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b85d56de10e934f488abea7a6726ef4ad45dbaee267de76b 2 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:00.105 13:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b85d56de10e934f488abea7a6726ef4ad45dbaee267de76b 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:00.105 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.FQM 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.FQM 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.FQM 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=08796bad00bc20bf0e1e6eb5120a7576d43575b908b86759 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.i46 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 08796bad00bc20bf0e1e6eb5120a7576d43575b908b86759 2 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 08796bad00bc20bf0e1e6eb5120a7576d43575b908b86759 2 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=08796bad00bc20bf0e1e6eb5120a7576d43575b908b86759 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.i46 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.i46 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.i46 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ce1f5b2a5da0a7d6ec2e1ac0bc03bab0 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.G3b 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ce1f5b2a5da0a7d6ec2e1ac0bc03bab0 1 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ce1f5b2a5da0a7d6ec2e1ac0bc03bab0 1 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ce1f5b2a5da0a7d6ec2e1ac0bc03bab0 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.G3b 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.G3b 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.G3b 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=247308d3701156a1931640d942a46334539acef0613b36b755d0c9731af6b0e8 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.tvF 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 247308d3701156a1931640d942a46334539acef0613b36b755d0c9731af6b0e8 3 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 247308d3701156a1931640d942a46334539acef0613b36b755d0c9731af6b0e8 3 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=247308d3701156a1931640d942a46334539acef0613b36b755d0c9731af6b0e8 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.tvF 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.tvF 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.tvF 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3845471 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3845471 ']' 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:00.366 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.627 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.627 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:00.628 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3845730 /var/tmp/host.sock 00:20:00.628 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3845730 ']' 00:20:00.628 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:20:00.628 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:00.628 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:00.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
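The block above has produced four target keys (keys[0]..keys[3]) and three controller keys (ckeys[0]..ckeys[2]); ckeys[3] is deliberately left empty so a later pass can exercise unidirectional authentication. Each gen_dhchap_key call is len/2 random bytes rendered as hex by xxd, then wrapped into a DHHC-1 secret by the inline python. A sketch of that wrapping, assuming the four bytes appended before base64 are a little-endian CRC-32 of the ASCII key (which matches the DHHC-1:..: strings printed by nvme connect later in this log; the two-digit field is the hash hint, 00=null .. 03=sha512 per the digests map above):

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as in "gen_dhchap_key null 48"
python3 -c 'import base64, sys, zlib
k = sys.argv[1].encode()                        # the ASCII hex string is the secret
crc = zlib.crc32(k).to_bytes(4, "little")       # integrity trailer
print("DHHC-1:00:%s:" % base64.b64encode(k + crc).decode())' "$key"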
00:20:00.628 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:00.628 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.888 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.888 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:00.888 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:00.888 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.889 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.889 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.889 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:00.889 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.sM1 00:20:00.889 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.889 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.889 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.889 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.sM1 00:20:00.889 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.sM1 00:20:01.150 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.kV7 ]] 00:20:01.150 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kV7 00:20:01.150 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.150 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.150 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.150 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kV7 00:20:01.150 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kV7 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.H7o 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.410 13:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.H7o 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.H7o 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.FQM ]] 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FQM 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FQM 00:20:01.410 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FQM 00:20:01.670 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:01.670 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.i46 00:20:01.670 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.670 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.670 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.670 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.i46 00:20:01.670 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.i46 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.G3b ]] 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G3b 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G3b 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G3b 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:01.961 13:25:09 
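Each key file is registered twice, once with the target over the default /var/tmp/spdk.sock and once with the host app, under the names key0..key3/ckey0..ckey2 that all later RPCs refer to. The loop above condenses to the following (rpc path from this run; the chmod 0600 applied at generation time matters, since the file-based keyring checks permissions):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"                        # target side
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host side
    if [[ -n ${ckeys[$i]} ]]; then
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done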
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tvF 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.tvF 00:20:01.961 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.tvF 00:20:02.242 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:02.242 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:02.242 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.242 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.242 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:02.242 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:02.542 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:02.542 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.542 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.542 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:02.542 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:02.542 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.542 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.542 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.542 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.542 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.542 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.542 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.543 
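The first sha256/null pass shows the two halves of a DHCHAP setup: the host app is told which digests and DH groups it may negotiate, and the target binds key names to the host NQN (bidirectional here, since a controller key is supplied). In isolation:

"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0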
13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.543 00:20:02.543 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.543 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.543 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.803 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.803 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.803 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.803 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.803 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.803 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.803 { 00:20:02.803 "cntlid": 1, 00:20:02.803 "qid": 0, 00:20:02.803 "state": "enabled", 00:20:02.803 "thread": "nvmf_tgt_poll_group_000", 00:20:02.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:02.803 "listen_address": { 00:20:02.803 "trtype": "TCP", 00:20:02.803 "adrfam": "IPv4", 00:20:02.803 "traddr": "10.0.0.2", 00:20:02.803 "trsvcid": "4420" 00:20:02.803 }, 00:20:02.803 "peer_address": { 00:20:02.803 "trtype": "TCP", 00:20:02.803 "adrfam": "IPv4", 00:20:02.803 "traddr": "10.0.0.1", 00:20:02.803 "trsvcid": "56400" 00:20:02.803 }, 00:20:02.803 "auth": { 00:20:02.803 "state": "completed", 00:20:02.803 "digest": "sha256", 00:20:02.803 "dhgroup": "null" 00:20:02.803 } 00:20:02.803 } 00:20:02.803 ]' 00:20:02.803 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.803 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.803 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.803 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:02.803 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.065 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.065 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.065 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.065 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
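The qpair dump above is the oracle for every pass: auth.state must come back "completed" and digest/dhgroup must echo the configured pair. The three jq probes can be collapsed into one (same filters, combined):

"$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
"$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'                # expect: completed sha256 null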
DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:20:03.065 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:20:04.005 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.005 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:04.005 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.005 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.005 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.005 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.005 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:04.005 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:04.266 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:04.266 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.266 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.266 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:04.266 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:04.266 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.266 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.266 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.266 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.266 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.266 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.266 13:25:12 
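After the SPDK-to-SPDK attach succeeds, the same subsystem is exercised from the kernel initiator; nvme connect takes the DHHC-1 strings inline rather than keyring names. A sketch, assuming the key files hold the DHHC-1 strings exactly as gen_dhchap_key wrote them:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
    --dhchap-secret "$(cat /tmp/spdk.key-null.sM1)" \
    --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.kV7)"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0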
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.266 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.266 00:20:04.527 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.527 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.527 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.527 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.527 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.527 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.527 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.527 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.527 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.527 { 00:20:04.527 "cntlid": 3, 00:20:04.527 "qid": 0, 00:20:04.527 "state": "enabled", 00:20:04.527 "thread": "nvmf_tgt_poll_group_000", 00:20:04.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:04.527 "listen_address": { 00:20:04.527 "trtype": "TCP", 00:20:04.527 "adrfam": "IPv4", 00:20:04.527 "traddr": "10.0.0.2", 00:20:04.527 "trsvcid": "4420" 00:20:04.527 }, 00:20:04.527 "peer_address": { 00:20:04.527 "trtype": "TCP", 00:20:04.527 "adrfam": "IPv4", 00:20:04.527 "traddr": "10.0.0.1", 00:20:04.527 "trsvcid": "45870" 00:20:04.527 }, 00:20:04.527 "auth": { 00:20:04.527 "state": "completed", 00:20:04.527 "digest": "sha256", 00:20:04.527 "dhgroup": "null" 00:20:04.527 } 00:20:04.527 } 00:20:04.527 ]' 00:20:04.527 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.527 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.527 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.788 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:04.788 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.788 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.788 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.788 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.788 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:20:04.788 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.732 13:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.732 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.733 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.993 00:20:05.993 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.993 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.993 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.253 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.253 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.253 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.253 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.253 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.253 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.253 { 00:20:06.253 "cntlid": 5, 00:20:06.253 "qid": 0, 00:20:06.253 "state": "enabled", 00:20:06.253 "thread": "nvmf_tgt_poll_group_000", 00:20:06.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:06.253 "listen_address": { 00:20:06.253 "trtype": "TCP", 00:20:06.253 "adrfam": "IPv4", 00:20:06.253 "traddr": "10.0.0.2", 00:20:06.253 "trsvcid": "4420" 00:20:06.253 }, 00:20:06.253 "peer_address": { 00:20:06.253 "trtype": "TCP", 00:20:06.253 "adrfam": "IPv4", 00:20:06.253 "traddr": "10.0.0.1", 00:20:06.253 "trsvcid": "45906" 00:20:06.253 }, 00:20:06.253 "auth": { 00:20:06.253 "state": "completed", 00:20:06.253 "digest": "sha256", 00:20:06.253 "dhgroup": "null" 00:20:06.253 } 00:20:06.253 } 00:20:06.253 ]' 00:20:06.253 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.253 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.253 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.253 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:06.253 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.514 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.514 13:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.514 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.514 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:20:06.514 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.456 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.717 00:20:07.717 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.717 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.717 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.978 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.978 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.978 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.978 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.978 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.978 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.978 { 00:20:07.978 "cntlid": 7, 00:20:07.978 "qid": 0, 00:20:07.978 "state": "enabled", 00:20:07.978 "thread": "nvmf_tgt_poll_group_000", 00:20:07.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:07.978 "listen_address": { 00:20:07.978 "trtype": "TCP", 00:20:07.978 "adrfam": "IPv4", 00:20:07.978 "traddr": "10.0.0.2", 00:20:07.978 "trsvcid": "4420" 00:20:07.978 }, 00:20:07.978 "peer_address": { 00:20:07.978 "trtype": "TCP", 00:20:07.978 "adrfam": "IPv4", 00:20:07.978 "traddr": "10.0.0.1", 00:20:07.978 "trsvcid": "45938" 00:20:07.978 }, 00:20:07.978 "auth": { 00:20:07.978 "state": "completed", 00:20:07.978 "digest": "sha256", 00:20:07.978 "dhgroup": "null" 00:20:07.978 } 00:20:07.978 } 00:20:07.978 ]' 00:20:07.978 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.978 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.978 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.978 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:07.978 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.238 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
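keys[3] has no companion controller key (ckeys[3] was left empty above), so this pass covers unidirectional authentication: the ${ckeys[$3]:+...} expansion in connect_authenticate drops --dhchap-ctrlr-key and only the host proves its identity. The effective difference from the earlier passes:

"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3   # no --dhchap-ctrlr-key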
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.238 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.238 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.238 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:20:08.238 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:20:09.179 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.179 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.179 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.179 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.179 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.179 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.179 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.179 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:09.179 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
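With the null group covered for all four keys, the matrix moves on to a real finite-field Diffie-Hellman group. Only the host-side options change; the qpair check in this pass expects dhgroup "ffdhe2048" with state still "completed":

"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048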
common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.179 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.439 00:20:09.439 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.439 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.439 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.700 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.700 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.700 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.700 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.700 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.700 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.700 { 00:20:09.700 "cntlid": 9, 00:20:09.700 "qid": 0, 00:20:09.700 "state": "enabled", 00:20:09.700 "thread": "nvmf_tgt_poll_group_000", 00:20:09.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:09.700 "listen_address": { 00:20:09.700 "trtype": "TCP", 00:20:09.700 "adrfam": "IPv4", 00:20:09.700 "traddr": "10.0.0.2", 00:20:09.700 "trsvcid": "4420" 00:20:09.700 }, 00:20:09.700 "peer_address": { 00:20:09.700 "trtype": "TCP", 00:20:09.700 "adrfam": "IPv4", 00:20:09.700 "traddr": "10.0.0.1", 00:20:09.700 "trsvcid": "45954" 00:20:09.700 }, 00:20:09.700 "auth": { 00:20:09.700 "state": "completed", 00:20:09.700 "digest": "sha256", 00:20:09.700 "dhgroup": "ffdhe2048" 00:20:09.700 } 00:20:09.700 } 00:20:09.700 ]' 00:20:09.700 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.700 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.700 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.701 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:20:09.701 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.701 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.701 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.701 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.962 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:20:09.962 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.904 13:25:18 
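For orientation: the sha256/ffdhe2048/key0 pass just traced boils down to the RPC sequence below. This is a minimal sketch, not the harness itself; it assumes the target and a second host-side SPDK app are already running, that the host RPC socket sits at /var/tmp/host.sock, and that keyring entries named key0/ckey0 were registered earlier in the run (all names taken from the log).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
subnqn=nqn.2024-03.io.spdk:cnode0

# Pin the host to one digest/dhgroup combination for this pass.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Allow the host NQN on the subsystem (target-side socket), binding
# the host key and the controller (bidirectional) key.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller through the host app; this drives the
# DH-HMAC-CHAP exchange whose outcome shows up in the qpair state.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0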
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.904 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.165 00:20:11.165 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.165 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.165 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.425 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.425 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.425 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.426 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.426 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.426 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.426 { 00:20:11.426 "cntlid": 11, 00:20:11.426 "qid": 0, 00:20:11.426 "state": "enabled", 00:20:11.426 "thread": "nvmf_tgt_poll_group_000", 00:20:11.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:11.426 "listen_address": { 00:20:11.426 "trtype": "TCP", 00:20:11.426 "adrfam": "IPv4", 00:20:11.426 "traddr": "10.0.0.2", 00:20:11.426 "trsvcid": "4420" 00:20:11.426 }, 00:20:11.426 "peer_address": { 00:20:11.426 "trtype": "TCP", 00:20:11.426 "adrfam": "IPv4", 00:20:11.426 "traddr": "10.0.0.1", 00:20:11.426 "trsvcid": "45984" 00:20:11.426 }, 00:20:11.426 "auth": { 00:20:11.426 "state": "completed", 00:20:11.426 "digest": "sha256", 00:20:11.426 "dhgroup": "ffdhe2048" 00:20:11.426 } 00:20:11.426 } 00:20:11.426 ]' 00:20:11.426 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.426 13:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.426 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.426 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.426 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.686 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.686 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.686 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.686 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:20:11.686 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:12.628 13:25:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.628 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.888 00:20:12.888 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.888 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.888 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.148 { 00:20:13.148 "cntlid": 13, 00:20:13.148 "qid": 0, 00:20:13.148 "state": "enabled", 00:20:13.148 "thread": "nvmf_tgt_poll_group_000", 00:20:13.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:13.148 "listen_address": { 00:20:13.148 "trtype": "TCP", 00:20:13.148 "adrfam": "IPv4", 00:20:13.148 "traddr": "10.0.0.2", 00:20:13.148 "trsvcid": "4420" 00:20:13.148 }, 00:20:13.148 "peer_address": { 00:20:13.148 "trtype": "TCP", 00:20:13.148 "adrfam": "IPv4", 00:20:13.148 "traddr": "10.0.0.1", 00:20:13.148 "trsvcid": "46000" 00:20:13.148 }, 00:20:13.148 "auth": { 00:20:13.148 "state": "completed", 00:20:13.148 "digest": 
"sha256", 00:20:13.148 "dhgroup": "ffdhe2048" 00:20:13.148 } 00:20:13.148 } 00:20:13.148 ]' 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.148 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.408 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:20:13.408 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:20:14.349 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.349 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:14.349 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.349 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.349 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.349 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.349 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.349 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.349 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:14.349 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.349 13:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.349 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:14.350 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:14.350 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.350 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:14.350 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.350 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.350 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.350 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:14.350 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.350 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.610 00:20:14.610 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.610 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.610 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.871 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.871 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.871 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.871 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.871 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.871 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.871 { 00:20:14.871 "cntlid": 15, 00:20:14.872 "qid": 0, 00:20:14.872 "state": "enabled", 00:20:14.872 "thread": "nvmf_tgt_poll_group_000", 00:20:14.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:14.872 "listen_address": { 00:20:14.872 "trtype": "TCP", 00:20:14.872 "adrfam": "IPv4", 00:20:14.872 "traddr": "10.0.0.2", 00:20:14.872 "trsvcid": "4420" 00:20:14.872 }, 00:20:14.872 "peer_address": { 00:20:14.872 "trtype": "TCP", 00:20:14.872 "adrfam": "IPv4", 00:20:14.872 "traddr": "10.0.0.1", 00:20:14.872 
"trsvcid": "39172" 00:20:14.872 }, 00:20:14.872 "auth": { 00:20:14.872 "state": "completed", 00:20:14.872 "digest": "sha256", 00:20:14.872 "dhgroup": "ffdhe2048" 00:20:14.872 } 00:20:14.872 } 00:20:14.872 ]' 00:20:14.872 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.872 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.872 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.872 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:14.872 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.872 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.872 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.872 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.133 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:20:15.133 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:16.073 13:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.073 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.333 00:20:16.334 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.334 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.334 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.594 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.594 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.594 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.594 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.594 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.594 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.594 { 00:20:16.594 "cntlid": 17, 00:20:16.594 "qid": 0, 00:20:16.594 "state": "enabled", 00:20:16.594 "thread": "nvmf_tgt_poll_group_000", 00:20:16.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:16.594 "listen_address": { 00:20:16.594 "trtype": "TCP", 00:20:16.594 "adrfam": "IPv4", 
00:20:16.594 "traddr": "10.0.0.2", 00:20:16.594 "trsvcid": "4420" 00:20:16.594 }, 00:20:16.594 "peer_address": { 00:20:16.595 "trtype": "TCP", 00:20:16.595 "adrfam": "IPv4", 00:20:16.595 "traddr": "10.0.0.1", 00:20:16.595 "trsvcid": "39196" 00:20:16.595 }, 00:20:16.595 "auth": { 00:20:16.595 "state": "completed", 00:20:16.595 "digest": "sha256", 00:20:16.595 "dhgroup": "ffdhe3072" 00:20:16.595 } 00:20:16.595 } 00:20:16.595 ]' 00:20:16.595 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.595 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.595 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.595 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.595 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.595 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.595 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.595 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.855 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:20:16.855 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:20:17.796 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.796 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:17.796 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.796 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.797 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.057 00:20:18.057 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.057 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.057 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.318 { 
00:20:18.318 "cntlid": 19, 00:20:18.318 "qid": 0, 00:20:18.318 "state": "enabled", 00:20:18.318 "thread": "nvmf_tgt_poll_group_000", 00:20:18.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:18.318 "listen_address": { 00:20:18.318 "trtype": "TCP", 00:20:18.318 "adrfam": "IPv4", 00:20:18.318 "traddr": "10.0.0.2", 00:20:18.318 "trsvcid": "4420" 00:20:18.318 }, 00:20:18.318 "peer_address": { 00:20:18.318 "trtype": "TCP", 00:20:18.318 "adrfam": "IPv4", 00:20:18.318 "traddr": "10.0.0.1", 00:20:18.318 "trsvcid": "39232" 00:20:18.318 }, 00:20:18.318 "auth": { 00:20:18.318 "state": "completed", 00:20:18.318 "digest": "sha256", 00:20:18.318 "dhgroup": "ffdhe3072" 00:20:18.318 } 00:20:18.318 } 00:20:18.318 ]' 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.318 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.579 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:20:18.579 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.519 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.780 00:20:19.780 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.780 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.780 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.040 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.040 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.040 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.040 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.040 13:25:27 
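Besides the SPDK host app, every pass also exercises the kernel initiator: the nvme connect/disconnect pairs above hand the same key material to nvme-cli as literal DHHC-1 secrets. A trimmed sketch of that leg for the key2 pass; the secrets are abbreviated here with ..., the full strings appear verbatim in the log.

# Kernel-initiator leg of the key2 pass (secrets shortened for display).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
    --dhchap-secret 'DHHC-1:02:MDg3...iSQ==:' \
    --dhchap-ctrl-secret 'DHHC-1:01:Y2Ux...5LJ:'

nvme disconnect -n nqn.2024-03.io.spdk:cnode0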
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.040 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.040 { 00:20:20.040 "cntlid": 21, 00:20:20.040 "qid": 0, 00:20:20.040 "state": "enabled", 00:20:20.040 "thread": "nvmf_tgt_poll_group_000", 00:20:20.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:20.040 "listen_address": { 00:20:20.040 "trtype": "TCP", 00:20:20.040 "adrfam": "IPv4", 00:20:20.040 "traddr": "10.0.0.2", 00:20:20.040 "trsvcid": "4420" 00:20:20.040 }, 00:20:20.040 "peer_address": { 00:20:20.040 "trtype": "TCP", 00:20:20.040 "adrfam": "IPv4", 00:20:20.040 "traddr": "10.0.0.1", 00:20:20.040 "trsvcid": "39270" 00:20:20.040 }, 00:20:20.040 "auth": { 00:20:20.040 "state": "completed", 00:20:20.040 "digest": "sha256", 00:20:20.040 "dhgroup": "ffdhe3072" 00:20:20.040 } 00:20:20.040 } 00:20:20.040 ]' 00:20:20.040 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.040 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.040 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.040 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:20.040 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.300 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.300 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.300 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.300 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:20:20.300 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:20:21.239 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.239 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:21.239 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.239 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.239 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.240 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.500 00:20:21.500 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.500 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.500 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.761 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.761 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.761 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.761 13:25:29 
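A note on the two RPC wrappers interleaved throughout: rpc_cmd talks to the NVMe-oF target over the default SPDK socket, while hostrpc (auth.sh@31, expanded inline in every trace line above) routes the bdev_nvme_* calls to a second SPDK application playing the host role. Its definition can be read directly off the trace:

# hostrpc as shown at target/auth.sh@31: same rpc.py, host-side socket.
hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'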
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.761 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.761 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.761 { 00:20:21.761 "cntlid": 23, 00:20:21.761 "qid": 0, 00:20:21.761 "state": "enabled", 00:20:21.761 "thread": "nvmf_tgt_poll_group_000", 00:20:21.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:21.761 "listen_address": { 00:20:21.761 "trtype": "TCP", 00:20:21.761 "adrfam": "IPv4", 00:20:21.761 "traddr": "10.0.0.2", 00:20:21.761 "trsvcid": "4420" 00:20:21.761 }, 00:20:21.761 "peer_address": { 00:20:21.761 "trtype": "TCP", 00:20:21.761 "adrfam": "IPv4", 00:20:21.761 "traddr": "10.0.0.1", 00:20:21.761 "trsvcid": "39296" 00:20:21.761 }, 00:20:21.761 "auth": { 00:20:21.761 "state": "completed", 00:20:21.761 "digest": "sha256", 00:20:21.761 "dhgroup": "ffdhe3072" 00:20:21.761 } 00:20:21.761 } 00:20:21.761 ]' 00:20:21.761 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.761 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.761 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.761 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:21.761 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.021 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.021 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.021 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.021 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:20:22.021 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.962 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.963 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.963 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.963 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.963 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.223 00:20:23.223 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.223 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.223 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.484 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.484 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.484 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.484 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.484 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.484 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.484 { 00:20:23.484 "cntlid": 25, 00:20:23.484 "qid": 0, 00:20:23.484 "state": "enabled", 00:20:23.484 "thread": "nvmf_tgt_poll_group_000", 00:20:23.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:23.484 "listen_address": { 00:20:23.484 "trtype": "TCP", 00:20:23.484 "adrfam": "IPv4", 00:20:23.484 "traddr": "10.0.0.2", 00:20:23.484 "trsvcid": "4420" 00:20:23.484 }, 00:20:23.484 "peer_address": { 00:20:23.484 "trtype": "TCP", 00:20:23.484 "adrfam": "IPv4", 00:20:23.484 "traddr": "10.0.0.1", 00:20:23.484 "trsvcid": "39342" 00:20:23.484 }, 00:20:23.484 "auth": { 00:20:23.484 "state": "completed", 00:20:23.484 "digest": "sha256", 00:20:23.484 "dhgroup": "ffdhe4096" 00:20:23.484 } 00:20:23.484 } 00:20:23.484 ]' 00:20:23.484 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.484 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.484 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.484 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.484 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.745 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.745 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.745 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.745 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:20:23.745 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.687 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.947 00:20:24.947 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.947 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.947 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.208 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.208 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.208 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.208 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.208 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.208 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.208 { 00:20:25.208 "cntlid": 27, 00:20:25.208 "qid": 0, 00:20:25.208 "state": "enabled", 00:20:25.208 "thread": "nvmf_tgt_poll_group_000", 00:20:25.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:25.208 "listen_address": { 00:20:25.208 "trtype": "TCP", 00:20:25.208 "adrfam": "IPv4", 00:20:25.208 "traddr": "10.0.0.2", 00:20:25.208 "trsvcid": "4420" 00:20:25.208 }, 00:20:25.208 "peer_address": { 00:20:25.208 "trtype": "TCP", 00:20:25.208 "adrfam": "IPv4", 00:20:25.208 "traddr": "10.0.0.1", 00:20:25.208 "trsvcid": "53958" 00:20:25.208 }, 00:20:25.208 "auth": { 00:20:25.208 "state": "completed", 00:20:25.208 "digest": "sha256", 00:20:25.208 "dhgroup": "ffdhe4096" 00:20:25.208 } 00:20:25.208 } 00:20:25.208 ]' 00:20:25.208 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.208 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.208 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.469 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.469 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.469 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.469 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.469 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.469 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:20:25.469 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:20:26.411 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:26.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.411 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:26.411 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.411 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.411 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.411 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.411 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.412 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.672 00:20:26.932 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
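Each pass of the loop above reduces to the RPC sequence sketched below, assembled from the commands visible in this log; it is an annotation, not part of the test script. Here rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, <hostnqn> abbreviates the nqn.2014-08.org.nvmexpress:uuid:00539ede-... host NQN used throughout, and key0/ckey0 are DHCHAP key names the test registered earlier in the run.

  # Host app (-s /var/tmp/host.sock): pin the digest/dhgroup combination under test.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # Target app (default RPC socket): allow <hostnqn> on the subsystem with a
  # key pair; --dhchap-ctrlr-key makes the authentication bidirectional.
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host app: attach a controller; the attach only yields nvme0 if the
  # DH-HMAC-CHAP handshake completes, so success here is the pass signal.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0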
00:20:26.932 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.932 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.932 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.932 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.932 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.932 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.932 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.932 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.932 { 00:20:26.932 "cntlid": 29, 00:20:26.932 "qid": 0, 00:20:26.932 "state": "enabled", 00:20:26.932 "thread": "nvmf_tgt_poll_group_000", 00:20:26.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:26.932 "listen_address": { 00:20:26.932 "trtype": "TCP", 00:20:26.932 "adrfam": "IPv4", 00:20:26.932 "traddr": "10.0.0.2", 00:20:26.932 "trsvcid": "4420" 00:20:26.932 }, 00:20:26.932 "peer_address": { 00:20:26.932 "trtype": "TCP", 00:20:26.932 "adrfam": "IPv4", 00:20:26.932 "traddr": "10.0.0.1", 00:20:26.932 "trsvcid": "53984" 00:20:26.932 }, 00:20:26.932 "auth": { 00:20:26.932 "state": "completed", 00:20:26.932 "digest": "sha256", 00:20:26.932 "dhgroup": "ffdhe4096" 00:20:26.932 } 00:20:26.932 } 00:20:26.932 ]' 00:20:26.933 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.933 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.933 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.193 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:27.193 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.193 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.193 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.193 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.193 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:20:27.193 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: 
--dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:20:28.133 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.133 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:28.133 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.133 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.133 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.133 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.133 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:28.133 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.393 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.653 00:20:28.653 13:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.653 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.653 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.653 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.653 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.653 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.653 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.653 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.653 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.653 { 00:20:28.653 "cntlid": 31, 00:20:28.653 "qid": 0, 00:20:28.653 "state": "enabled", 00:20:28.653 "thread": "nvmf_tgt_poll_group_000", 00:20:28.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:28.653 "listen_address": { 00:20:28.653 "trtype": "TCP", 00:20:28.653 "adrfam": "IPv4", 00:20:28.653 "traddr": "10.0.0.2", 00:20:28.653 "trsvcid": "4420" 00:20:28.653 }, 00:20:28.653 "peer_address": { 00:20:28.653 "trtype": "TCP", 00:20:28.653 "adrfam": "IPv4", 00:20:28.653 "traddr": "10.0.0.1", 00:20:28.653 "trsvcid": "54024" 00:20:28.653 }, 00:20:28.653 "auth": { 00:20:28.653 "state": "completed", 00:20:28.653 "digest": "sha256", 00:20:28.653 "dhgroup": "ffdhe4096" 00:20:28.653 } 00:20:28.653 } 00:20:28.653 ]' 00:20:28.653 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.914 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.914 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.914 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:28.914 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.914 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.914 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.914 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.174 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:20:29.174 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:20:29.744 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.744 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:29.744 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.744 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.744 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.744 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.744 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.744 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.744 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.004 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.264 00:20:30.525 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.525 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.525 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.525 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.525 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.525 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.525 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.525 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.525 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.525 { 00:20:30.525 "cntlid": 33, 00:20:30.525 "qid": 0, 00:20:30.525 "state": "enabled", 00:20:30.525 "thread": "nvmf_tgt_poll_group_000", 00:20:30.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:30.525 "listen_address": { 00:20:30.525 "trtype": "TCP", 00:20:30.525 "adrfam": "IPv4", 00:20:30.525 "traddr": "10.0.0.2", 00:20:30.525 "trsvcid": "4420" 00:20:30.525 }, 00:20:30.525 "peer_address": { 00:20:30.525 "trtype": "TCP", 00:20:30.525 "adrfam": "IPv4", 00:20:30.525 "traddr": "10.0.0.1", 00:20:30.525 "trsvcid": "54044" 00:20:30.525 }, 00:20:30.525 "auth": { 00:20:30.525 "state": "completed", 00:20:30.525 "digest": "sha256", 00:20:30.525 "dhgroup": "ffdhe6144" 00:20:30.525 } 00:20:30.525 } 00:20:30.525 ]' 00:20:30.525 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.525 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.525 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.785 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.785 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.785 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.785 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.785 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.785 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret 
DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:20:30.785 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:20:31.725 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.725 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:31.725 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.725 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.725 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.725 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.725 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:31.725 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.986 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.246 00:20:32.246 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.246 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.246 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.506 { 00:20:32.506 "cntlid": 35, 00:20:32.506 "qid": 0, 00:20:32.506 "state": "enabled", 00:20:32.506 "thread": "nvmf_tgt_poll_group_000", 00:20:32.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:32.506 "listen_address": { 00:20:32.506 "trtype": "TCP", 00:20:32.506 "adrfam": "IPv4", 00:20:32.506 "traddr": "10.0.0.2", 00:20:32.506 "trsvcid": "4420" 00:20:32.506 }, 00:20:32.506 "peer_address": { 00:20:32.506 "trtype": "TCP", 00:20:32.506 "adrfam": "IPv4", 00:20:32.506 "traddr": "10.0.0.1", 00:20:32.506 "trsvcid": "54070" 00:20:32.506 }, 00:20:32.506 "auth": { 00:20:32.506 "state": "completed", 00:20:32.506 "digest": "sha256", 00:20:32.506 "dhgroup": "ffdhe6144" 00:20:32.506 } 00:20:32.506 } 00:20:32.506 ]' 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.506 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.767 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:20:32.767 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.708 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.968 00:20:33.968 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.968 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.968 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.228 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.228 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.228 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.228 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.228 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.228 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.228 { 00:20:34.228 "cntlid": 37, 00:20:34.228 "qid": 0, 00:20:34.228 "state": "enabled", 00:20:34.228 "thread": "nvmf_tgt_poll_group_000", 00:20:34.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:34.228 "listen_address": { 00:20:34.228 "trtype": "TCP", 00:20:34.228 "adrfam": "IPv4", 00:20:34.228 "traddr": "10.0.0.2", 00:20:34.228 "trsvcid": "4420" 00:20:34.228 }, 00:20:34.228 "peer_address": { 00:20:34.228 "trtype": "TCP", 00:20:34.228 "adrfam": "IPv4", 00:20:34.228 "traddr": "10.0.0.1", 00:20:34.228 "trsvcid": "38534" 00:20:34.228 }, 00:20:34.228 "auth": { 00:20:34.228 "state": "completed", 00:20:34.228 "digest": "sha256", 00:20:34.228 "dhgroup": "ffdhe6144" 00:20:34.228 } 00:20:34.228 } 00:20:34.228 ]' 00:20:34.228 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.228 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.228 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.488 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.488 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.488 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.488 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:34.488 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.488 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:20:34.488 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:20:35.427 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.427 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:35.427 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.427 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.427 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.427 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.428 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:35.428 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:35.688 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:35.688 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.688 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.688 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:35.688 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:35.688 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.688 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:35.688 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.688 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.688 13:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.688 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:35.688 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.688 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.948 00:20:35.948 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.948 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.948 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.208 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.208 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.208 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.208 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.208 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.208 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.208 { 00:20:36.208 "cntlid": 39, 00:20:36.208 "qid": 0, 00:20:36.208 "state": "enabled", 00:20:36.208 "thread": "nvmf_tgt_poll_group_000", 00:20:36.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:36.208 "listen_address": { 00:20:36.208 "trtype": "TCP", 00:20:36.208 "adrfam": "IPv4", 00:20:36.208 "traddr": "10.0.0.2", 00:20:36.208 "trsvcid": "4420" 00:20:36.208 }, 00:20:36.208 "peer_address": { 00:20:36.208 "trtype": "TCP", 00:20:36.208 "adrfam": "IPv4", 00:20:36.208 "traddr": "10.0.0.1", 00:20:36.208 "trsvcid": "38556" 00:20:36.208 }, 00:20:36.208 "auth": { 00:20:36.208 "state": "completed", 00:20:36.208 "digest": "sha256", 00:20:36.208 "dhgroup": "ffdhe6144" 00:20:36.208 } 00:20:36.208 } 00:20:36.208 ]' 00:20:36.208 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.208 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.208 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.208 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:36.208 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.208 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:36.208 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.208 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.468 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:20:36.468 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
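After each attach, the loop verifies the negotiated auth parameters from the target's view of the new queue pair and then detaches before moving to the next key/dhgroup combination. A sketch of that check, using the same RPCs and jq filters as the surrounding log (rpc.py again abbreviates the full script path; ffdhe8192 is the dhgroup of the current pass):

  # Target app: dump the subsystem's qpairs and inspect the auth block.
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Host app: drop the controller so the next combination starts clean.
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0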
00:20:37.408 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:37.409 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:37.409 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:37.409 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:37.409 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:37.979
00:20:37.979 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:37.979 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:37.979 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:37.979 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:37.979 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:37.979 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:37.979 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:37.979 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:37.979 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:37.979 {
00:20:37.979 "cntlid": 41,
00:20:37.979 "qid": 0,
00:20:37.979 "state": "enabled",
00:20:37.979 "thread": "nvmf_tgt_poll_group_000",
00:20:37.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:37.979 "listen_address": {
00:20:37.979 "trtype": "TCP",
00:20:37.979 "adrfam": "IPv4",
00:20:37.979 "traddr": "10.0.0.2",
00:20:37.979 "trsvcid": "4420"
00:20:37.979 },
00:20:37.980 "peer_address": {
00:20:37.980 "trtype": "TCP",
00:20:37.980 "adrfam": "IPv4",
00:20:37.980 "traddr": "10.0.0.1",
00:20:37.980 "trsvcid": "38582"
00:20:37.980 },
00:20:37.980 "auth": {
00:20:37.980 "state": "completed",
00:20:37.980 "digest": "sha256",
00:20:37.980 "dhgroup": "ffdhe8192"
00:20:37.980 }
00:20:37.980 }
00:20:37.980 ]'
00:20:37.980 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:38.240 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:38.240 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:38.240 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:38.240 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:38.240 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:38.240 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:38.240 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:38.501 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=:
00:20:38.501 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=:
00:20:39.078 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:39.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:39.078 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:39.078 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:39.078 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.078 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:39.078 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:39.078 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:39.078 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:39.338 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:20:39.339 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:39.339 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:39.339 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:39.339 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:39.339 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:39.339 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:39.339 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:39.339 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.339 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:39.339 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:39.339 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:39.339 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:39.909
00:20:39.909 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:39.909 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:39.909 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:40.169 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:40.169 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:40.169 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:40.169 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.169 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:40.169 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:40.169 {
00:20:40.169 "cntlid": 43,
00:20:40.169 "qid": 0,
00:20:40.169 "state": "enabled",
00:20:40.169 "thread": "nvmf_tgt_poll_group_000",
00:20:40.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:40.169 "listen_address": {
00:20:40.169 "trtype": "TCP",
00:20:40.169 "adrfam": "IPv4",
00:20:40.169 "traddr": "10.0.0.2",
00:20:40.169 "trsvcid": "4420"
00:20:40.169 },
00:20:40.169 "peer_address": {
00:20:40.169 "trtype": "TCP",
00:20:40.169 "adrfam": "IPv4",
00:20:40.169 "traddr": "10.0.0.1",
00:20:40.169 "trsvcid": "38602"
00:20:40.169 },
00:20:40.169 "auth": {
00:20:40.169 "state": "completed",
00:20:40.169 "digest": "sha256",
00:20:40.169 "dhgroup": "ffdhe8192"
00:20:40.169 }
00:20:40.169 }
00:20:40.169 ]'
00:20:40.169 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:40.169 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:40.169 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:40.169 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:40.169 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:40.169 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:40.169 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:40.169 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:40.429 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==:
00:20:40.429 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==:
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:41.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:41.394 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:42.089
00:20:42.089 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:42.089 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:42.089 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:42.089 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:42.089 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:42.089 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.089 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:42.089 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.089 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:42.089 {
00:20:42.089 "cntlid": 45,
00:20:42.089 "qid": 0,
00:20:42.089 "state": "enabled",
00:20:42.089 "thread": "nvmf_tgt_poll_group_000",
00:20:42.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:42.089 "listen_address": {
00:20:42.089 "trtype": "TCP",
00:20:42.089 "adrfam": "IPv4",
00:20:42.089 "traddr": "10.0.0.2",
00:20:42.089 "trsvcid": "4420"
00:20:42.089 },
00:20:42.089 "peer_address": {
00:20:42.089 "trtype": "TCP",
00:20:42.089 "adrfam": "IPv4",
00:20:42.089 "traddr": "10.0.0.1",
00:20:42.089 "trsvcid": "38628"
00:20:42.089 },
00:20:42.089 "auth": {
00:20:42.089 "state": "completed",
00:20:42.089 "digest": "sha256",
00:20:42.089 "dhgroup": "ffdhe8192"
00:20:42.089 }
00:20:42.089 }
00:20:42.089 ]'
00:20:42.089 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:42.089 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:42.089 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:42.089 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:42.089 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:42.350 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:42.350 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:42.350 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:42.350 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ:
00:20:42.350 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ:
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:43.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:43.295 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:43.866
00:20:43.866 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:43.866 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:43.866 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:44.127 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:44.127 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:44.127 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:44.127 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:44.127 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:44.127 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:44.127 {
00:20:44.127 "cntlid": 47,
00:20:44.127 "qid": 0,
00:20:44.127 "state": "enabled",
00:20:44.127 "thread": "nvmf_tgt_poll_group_000",
00:20:44.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:44.127 "listen_address": {
00:20:44.127 "trtype": "TCP",
00:20:44.127 "adrfam": "IPv4",
00:20:44.127 "traddr": "10.0.0.2",
00:20:44.127 "trsvcid": "4420"
00:20:44.127 },
00:20:44.127 "peer_address": {
00:20:44.127 "trtype": "TCP",
00:20:44.127 "adrfam": "IPv4",
00:20:44.127 "traddr": "10.0.0.1",
00:20:44.127 "trsvcid": "38646"
00:20:44.127 },
00:20:44.127 "auth": {
00:20:44.127 "state": "completed",
00:20:44.127 "digest": "sha256",
00:20:44.127 "dhgroup": "ffdhe8192"
00:20:44.127 }
00:20:44.127 }
00:20:44.127 ]'
00:20:44.127 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:44.127 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:44.127 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:44.127 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:44.127 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:44.127 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:44.127 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:44.389 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:44.389 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=:
00:20:44.389 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=:
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:45.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:45.331 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:45.592
00:20:45.592 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:45.592 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:45.592 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:45.853 {
00:20:45.853 "cntlid": 49,
00:20:45.853 "qid": 0,
00:20:45.853 "state": "enabled",
00:20:45.853 "thread": "nvmf_tgt_poll_group_000",
00:20:45.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:45.853 "listen_address": {
00:20:45.853 "trtype": "TCP",
00:20:45.853 "adrfam": "IPv4",
00:20:45.853 "traddr": "10.0.0.2",
00:20:45.853 "trsvcid": "4420"
00:20:45.853 },
00:20:45.853 "peer_address": {
00:20:45.853 "trtype": "TCP",
00:20:45.853 "adrfam": "IPv4",
00:20:45.853 "traddr": "10.0.0.1",
00:20:45.853 "trsvcid": "57714"
00:20:45.853 },
00:20:45.853 "auth": {
00:20:45.853 "state": "completed",
00:20:45.853 "digest": "sha384",
00:20:45.853 "dhgroup": "null"
00:20:45.853 }
00:20:45.853 }
00:20:45.853 ]'
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:45.853 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:46.115 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=:
00:20:46.115 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=:
00:20:47.056 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:47.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:47.056 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:47.056 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:47.056 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.056 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:47.056 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:47.057 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:47.317
00:20:47.317 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:47.317 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:47.317 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:47.578 {
00:20:47.578 "cntlid": 51,
00:20:47.578 "qid": 0,
00:20:47.578 "state": "enabled",
00:20:47.578 "thread": "nvmf_tgt_poll_group_000",
00:20:47.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:47.578 "listen_address": {
00:20:47.578 "trtype": "TCP",
00:20:47.578 "adrfam": "IPv4",
00:20:47.578 "traddr": "10.0.0.2",
00:20:47.578 "trsvcid": "4420"
00:20:47.578 },
00:20:47.578 "peer_address": {
00:20:47.578 "trtype": "TCP",
00:20:47.578 "adrfam": "IPv4",
00:20:47.578 "traddr": "10.0.0.1",
00:20:47.578 "trsvcid": "57750"
00:20:47.578 },
00:20:47.578 "auth": {
00:20:47.578 "state": "completed",
00:20:47.578 "digest": "sha384",
00:20:47.578 "dhgroup": "null"
00:20:47.578 }
00:20:47.578 }
00:20:47.578 ]'
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:47.578 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:47.849 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==:
00:20:47.849 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==:
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:48.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
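This stretch of the sha384 pass uses the "null" DH group: DH-HMAC-CHAP still performs its challenge/response, but without an ephemeral FFDHE exchange, which is why the qpair dumps above report "dhgroup": "null" while "state" must still reach "completed". After every attach, the test asserts both sides of the connection; a sketch of those checks, mirroring the @73-@77 assertions in the trace ($digest and $dhgroup are the locals set per iteration, and the hostrpc wrapper on the line above expands to the rpc.py invocation the log resumes with below):

	# the SPDK host must have created the controller named nvme0
	[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
	# the target's qpair must have negotiated exactly what was configured
	qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
	[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
	[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
	[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]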
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:48.790 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:49.050
00:20:49.050 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:49.050 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:49.050 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:49.310 {
00:20:49.310 "cntlid": 53,
00:20:49.310 "qid": 0,
00:20:49.310 "state": "enabled",
00:20:49.310 "thread": "nvmf_tgt_poll_group_000",
00:20:49.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:49.310 "listen_address": {
00:20:49.310 "trtype": "TCP",
00:20:49.310 "adrfam": "IPv4",
00:20:49.310 "traddr": "10.0.0.2",
00:20:49.310 "trsvcid": "4420"
00:20:49.310 },
00:20:49.310 "peer_address": {
00:20:49.310 "trtype": "TCP",
00:20:49.310 "adrfam": "IPv4",
00:20:49.310 "traddr": "10.0.0.1",
00:20:49.310 "trsvcid": "57784"
00:20:49.310 },
00:20:49.310 "auth": {
00:20:49.310 "state": "completed",
00:20:49.310 "digest": "sha384",
00:20:49.310 "dhgroup": "null"
00:20:49.310 }
00:20:49.310 }
00:20:49.310 ]'
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:49.310 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:49.571 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ:
00:20:49.571 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ:
00:20:50.141 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:50.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:50.402 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:50.663
00:20:50.663 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:50.663 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:50.663 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:50.924 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:50.924 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:50.924 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:50.924 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.924 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:50.924 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:50.924 {
00:20:50.924 "cntlid": 55,
00:20:50.924 "qid": 0,
00:20:50.924 "state": "enabled",
00:20:50.924 "thread": "nvmf_tgt_poll_group_000",
00:20:50.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:50.924 "listen_address": {
00:20:50.924 "trtype": "TCP",
00:20:50.924 "adrfam": "IPv4",
00:20:50.924 "traddr": "10.0.0.2",
00:20:50.924 "trsvcid": "4420"
00:20:50.924 },
00:20:50.924 "peer_address": {
00:20:50.924 "trtype": "TCP",
00:20:50.924 "adrfam": "IPv4",
00:20:50.924 "traddr": "10.0.0.1",
00:20:50.924 "trsvcid": "57804"
00:20:50.924 },
00:20:50.924 "auth": {
00:20:50.924 "state": "completed",
00:20:50.924 "digest": "sha384",
00:20:50.924 "dhgroup": "null"
00:20:50.924 }
00:20:50.924 }
00:20:50.924 ]'
00:20:50.924 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:50.924 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:50.924 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:50.924 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:50.924 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:51.185 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:51.185 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:51.185 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:51.186 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=:
00:20:51.186 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=:
00:20:52.129 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:52.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:52.129 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:52.129 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:52.129 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.129 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:52.129 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:52.129 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:52.129 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:52.129 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:52.129 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:20:52.129 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:52.129 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:52.129 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:52.129 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:52.129 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:52.129 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:52.129 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:52.129 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.129 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:52.129 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:52.129 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:52.130 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:52.390
00:20:52.390 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:52.390 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:52.390 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:52.651 {
00:20:52.651 "cntlid": 57,
00:20:52.651 "qid": 0,
00:20:52.651 "state": "enabled",
00:20:52.651 "thread": "nvmf_tgt_poll_group_000",
00:20:52.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:52.651 "listen_address": {
00:20:52.651 "trtype": "TCP",
00:20:52.651 "adrfam": "IPv4",
00:20:52.651 "traddr": "10.0.0.2",
00:20:52.651 "trsvcid": "4420"
00:20:52.651 },
00:20:52.651 "peer_address": {
00:20:52.651 "trtype": "TCP",
00:20:52.651 "adrfam": "IPv4",
00:20:52.651 "traddr": "10.0.0.1",
00:20:52.651 "trsvcid": "57820"
00:20:52.651 },
00:20:52.651 "auth": {
00:20:52.651 "state": "completed",
00:20:52.651 "digest": "sha384",
00:20:52.651 "dhgroup": "ffdhe2048"
00:20:52.651 }
00:20:52.651 }
00:20:52.651 ]'
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:52.651 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:52.912 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=:
00:20:52.912 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=:
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:53.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:53.853 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:54.113
00:20:54.113 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:54.113 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:54.113 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:54.374 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:54.374 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:54.374 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:54.374 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.374 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:54.374 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:54.375 {
00:20:54.375 "cntlid": 59,
00:20:54.375 "qid": 0,
00:20:54.375 "state": "enabled",
00:20:54.375 "thread": "nvmf_tgt_poll_group_000",
00:20:54.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:20:54.375 "listen_address": {
00:20:54.375 "trtype": "TCP",
00:20:54.375 "adrfam": "IPv4",
00:20:54.375 "traddr": "10.0.0.2",
00:20:54.375 "trsvcid": "4420"
00:20:54.375 },
00:20:54.375 "peer_address": {
00:20:54.375 "trtype": "TCP",
00:20:54.375 "adrfam": "IPv4",
00:20:54.375 "traddr": "10.0.0.1",
00:20:54.375 "trsvcid": "43614"
00:20:54.375 },
00:20:54.375 "auth": {
00:20:54.375 "state": "completed",
00:20:54.375 "digest": "sha384",
00:20:54.375 "dhgroup": "ffdhe2048"
00:20:54.375 }
00:20:54.375 }
00:20:54.375 ]'
00:20:54.375 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:54.375 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:54.375 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:54.375 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:54.375 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:54.375 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:54.375 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:54.375 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:54.635 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==:
00:20:54.635 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==:
00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:55.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- #
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.577 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.838 00:20:55.838 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.838 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.838 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.098 { 00:20:56.098 "cntlid": 61, 00:20:56.098 "qid": 0, 00:20:56.098 "state": "enabled", 00:20:56.098 "thread": "nvmf_tgt_poll_group_000", 00:20:56.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:56.098 "listen_address": { 00:20:56.098 "trtype": "TCP", 00:20:56.098 "adrfam": "IPv4", 00:20:56.098 "traddr": "10.0.0.2", 00:20:56.098 "trsvcid": "4420" 00:20:56.098 }, 00:20:56.098 "peer_address": { 00:20:56.098 "trtype": "TCP", 00:20:56.098 "adrfam": "IPv4", 00:20:56.098 "traddr": "10.0.0.1", 00:20:56.098 "trsvcid": "43636" 00:20:56.098 }, 00:20:56.098 "auth": { 00:20:56.098 "state": "completed", 00:20:56.098 "digest": "sha384", 00:20:56.098 "dhgroup": "ffdhe2048" 00:20:56.098 } 00:20:56.098 } 00:20:56.098 ]' 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.098 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.359 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:20:56.359 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:20:56.929 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.188 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:57.188 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.188 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.188 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.188 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.188 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.188 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.188 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.448 00:20:57.448 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.448 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.448 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.709 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.709 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.709 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.709 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.709 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.709 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.709 { 00:20:57.709 "cntlid": 63, 00:20:57.709 "qid": 0, 00:20:57.709 "state": "enabled", 00:20:57.709 "thread": "nvmf_tgt_poll_group_000", 00:20:57.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:57.709 "listen_address": { 00:20:57.709 "trtype": "TCP", 00:20:57.709 "adrfam": "IPv4", 00:20:57.709 "traddr": "10.0.0.2", 00:20:57.709 "trsvcid": "4420" 00:20:57.709 }, 00:20:57.709 "peer_address": { 00:20:57.709 "trtype": "TCP", 00:20:57.709 "adrfam": "IPv4", 00:20:57.709 "traddr": "10.0.0.1", 00:20:57.709 "trsvcid": "43652" 00:20:57.709 }, 00:20:57.709 "auth": { 00:20:57.709 "state": "completed", 00:20:57.709 "digest": "sha384", 00:20:57.709 "dhgroup": "ffdhe2048" 00:20:57.709 } 00:20:57.709 } 00:20:57.709 ]' 00:20:57.709 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.709 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.709 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.709 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.709 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.969 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.969 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.969 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.969 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:20:57.969 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:20:58.909 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:58.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.909 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:58.909 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.909 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.909 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.909 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.909 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.909 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.909 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.169 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:59.169 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.169 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.169 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:59.169 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.169 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.169 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.169 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.169 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.170 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.170 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.170 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.170 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.430 
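Each connect_authenticate cycle in this log follows the same verification pattern, seen immediately below for ffdhe3072/key0: attach the controller through the host RPC socket, confirm it registered as nvme0, then query the target for the subsystem's qpairs and assert that DH-HMAC-CHAP completed with the expected digest and dhgroup. A minimal standalone sketch of that check, reusing the socket path, NQN, and jq filters from this run (the target-side socket path /var/tmp/spdk.sock is an assumption — the log drives the target through the rpc_cmd wrapper instead):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Host side: the attached controller must show up under the expected name.
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Target side: the accepted qpair must report a completed sha384/ffdhe3072 handshake.
  qpairs=$("$rpc" -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]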
00:20:59.430 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.430 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.430 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.430 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.430 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.430 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.430 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.430 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.430 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.430 { 00:20:59.430 "cntlid": 65, 00:20:59.430 "qid": 0, 00:20:59.430 "state": "enabled", 00:20:59.430 "thread": "nvmf_tgt_poll_group_000", 00:20:59.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:59.430 "listen_address": { 00:20:59.430 "trtype": "TCP", 00:20:59.430 "adrfam": "IPv4", 00:20:59.430 "traddr": "10.0.0.2", 00:20:59.430 "trsvcid": "4420" 00:20:59.430 }, 00:20:59.430 "peer_address": { 00:20:59.430 "trtype": "TCP", 00:20:59.430 "adrfam": "IPv4", 00:20:59.430 "traddr": "10.0.0.1", 00:20:59.430 "trsvcid": "43690" 00:20:59.430 }, 00:20:59.430 "auth": { 00:20:59.430 "state": "completed", 00:20:59.430 "digest": "sha384", 00:20:59.430 "dhgroup": "ffdhe3072" 00:20:59.430 } 00:20:59.430 } 00:20:59.430 ]' 00:20:59.430 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.690 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.690 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.690 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.690 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.690 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.690 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.690 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.950 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:20:59.951 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:00.522 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.522 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:00.522 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.522 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.522 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.522 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.522 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.522 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.782 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.042 00:21:01.042 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.042 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.042 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.302 { 00:21:01.302 "cntlid": 67, 00:21:01.302 "qid": 0, 00:21:01.302 "state": "enabled", 00:21:01.302 "thread": "nvmf_tgt_poll_group_000", 00:21:01.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:01.302 "listen_address": { 00:21:01.302 "trtype": "TCP", 00:21:01.302 "adrfam": "IPv4", 00:21:01.302 "traddr": "10.0.0.2", 00:21:01.302 "trsvcid": "4420" 00:21:01.302 }, 00:21:01.302 "peer_address": { 00:21:01.302 "trtype": "TCP", 00:21:01.302 "adrfam": "IPv4", 00:21:01.302 "traddr": "10.0.0.1", 00:21:01.302 "trsvcid": "43706" 00:21:01.302 }, 00:21:01.302 "auth": { 00:21:01.302 "state": "completed", 00:21:01.302 "digest": "sha384", 00:21:01.302 "dhgroup": "ffdhe3072" 00:21:01.302 } 00:21:01.302 } 00:21:01.302 ]' 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.302 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.563 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret 
DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:01.563 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.510 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.771 00:21:02.771 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.771 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.771 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.031 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.031 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.031 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.032 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.032 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.032 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.032 { 00:21:03.032 "cntlid": 69, 00:21:03.032 "qid": 0, 00:21:03.032 "state": "enabled", 00:21:03.032 "thread": "nvmf_tgt_poll_group_000", 00:21:03.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:03.032 "listen_address": { 00:21:03.032 "trtype": "TCP", 00:21:03.032 "adrfam": "IPv4", 00:21:03.032 "traddr": "10.0.0.2", 00:21:03.032 "trsvcid": "4420" 00:21:03.032 }, 00:21:03.032 "peer_address": { 00:21:03.032 "trtype": "TCP", 00:21:03.032 "adrfam": "IPv4", 00:21:03.032 "traddr": "10.0.0.1", 00:21:03.032 "trsvcid": "43716" 00:21:03.032 }, 00:21:03.032 "auth": { 00:21:03.032 "state": "completed", 00:21:03.032 "digest": "sha384", 00:21:03.032 "dhgroup": "ffdhe3072" 00:21:03.032 } 00:21:03.032 } 00:21:03.032 ]' 00:21:03.032 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.032 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.032 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.032 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.032 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.032 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.032 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.032 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:03.293 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:03.293 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:04.240 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.240 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:04.240 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.240 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.240 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.240 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.240 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.240 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
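Note the asymmetry at keyid 3 just above: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion drops the controller-key flag entirely and nvmf_subsystem_add_host is issued with --dhchap-key key3 alone (host-authenticated only), whereas keyids 0-2 also pass a ckey for bidirectional DH-HMAC-CHAP. A sketch of that conditional expansion, with rpc_cmd standing for the autotest wrapper around scripts/rpc.py and the key names as placeholders for the keyring entries created earlier in the test:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  ckeys=(ckey0 ckey1 ckey2 "")   # index 3 intentionally left empty by auth.sh
  keyid=3

  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  # keyid=3 -> ckey=()                         : unidirectional authentication
  # keyid=0 -> ckey=(--dhchap-ctrlr-key ckey0) : mutual authentication
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
          --dhchap-key "key$keyid" "${ckey[@]}"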
00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.240 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.500 00:21:04.500 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.500 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.500 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.761 { 00:21:04.761 "cntlid": 71, 00:21:04.761 "qid": 0, 00:21:04.761 "state": "enabled", 00:21:04.761 "thread": "nvmf_tgt_poll_group_000", 00:21:04.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:04.761 "listen_address": { 00:21:04.761 "trtype": "TCP", 00:21:04.761 "adrfam": "IPv4", 00:21:04.761 "traddr": "10.0.0.2", 00:21:04.761 "trsvcid": "4420" 00:21:04.761 }, 00:21:04.761 "peer_address": { 00:21:04.761 "trtype": "TCP", 00:21:04.761 "adrfam": "IPv4", 00:21:04.761 "traddr": "10.0.0.1", 00:21:04.761 "trsvcid": "54064" 00:21:04.761 }, 00:21:04.761 "auth": { 00:21:04.761 "state": "completed", 00:21:04.761 "digest": "sha384", 00:21:04.761 "dhgroup": "ffdhe3072" 00:21:04.761 } 00:21:04.761 } 00:21:04.761 ]' 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.761 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.022 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:05.022 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.966 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.967 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.967 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.967 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.967 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
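From here the outer dhgroup loop has advanced to ffdhe4096 and the whole sequence repeats unchanged; the per-iteration host setup is a single RPC. The sweep this section executes reduces to the nested loop below — a sketch with the loop bounds inferred from the iterations visible in this section (the full auth.sh also covers other digests and larger ffdhe groups):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
      for keyid in 0 1 2 3; do
          # Re-arm the host's accepted DH-HMAC-CHAP parameters for this round.
          "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
                 --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          # ... add_host with key$keyid, attach, assert qpair auth, detach ...
      done
  done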
00:21:05.967 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.967 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.967 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.227 00:21:06.227 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.227 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.227 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.489 { 00:21:06.489 "cntlid": 73, 00:21:06.489 "qid": 0, 00:21:06.489 "state": "enabled", 00:21:06.489 "thread": "nvmf_tgt_poll_group_000", 00:21:06.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:06.489 "listen_address": { 00:21:06.489 "trtype": "TCP", 00:21:06.489 "adrfam": "IPv4", 00:21:06.489 "traddr": "10.0.0.2", 00:21:06.489 "trsvcid": "4420" 00:21:06.489 }, 00:21:06.489 "peer_address": { 00:21:06.489 "trtype": "TCP", 00:21:06.489 "adrfam": "IPv4", 00:21:06.489 "traddr": "10.0.0.1", 00:21:06.489 "trsvcid": "54086" 00:21:06.489 }, 00:21:06.489 "auth": { 00:21:06.489 "state": "completed", 00:21:06.489 "digest": "sha384", 00:21:06.489 "dhgroup": "ffdhe4096" 00:21:06.489 } 00:21:06.489 } 00:21:06.489 ]' 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.489 
13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.489 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.750 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:06.750 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.694 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.955 00:21:07.955 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.955 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.955 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.217 { 00:21:08.217 "cntlid": 75, 00:21:08.217 "qid": 0, 00:21:08.217 "state": "enabled", 00:21:08.217 "thread": "nvmf_tgt_poll_group_000", 00:21:08.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:08.217 "listen_address": { 00:21:08.217 "trtype": "TCP", 00:21:08.217 "adrfam": "IPv4", 00:21:08.217 "traddr": "10.0.0.2", 00:21:08.217 "trsvcid": "4420" 00:21:08.217 }, 00:21:08.217 "peer_address": { 00:21:08.217 "trtype": "TCP", 00:21:08.217 "adrfam": "IPv4", 00:21:08.217 "traddr": "10.0.0.1", 00:21:08.217 "trsvcid": "54118" 00:21:08.217 }, 00:21:08.217 "auth": { 00:21:08.217 "state": "completed", 00:21:08.217 "digest": "sha384", 00:21:08.217 "dhgroup": "ffdhe4096" 00:21:08.217 } 00:21:08.217 } 00:21:08.217 ]' 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.217 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.479 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:08.479 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.421 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.682 00:21:09.682 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.682 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.682 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.943 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.943 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.943 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.943 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.943 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.943 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.943 { 00:21:09.943 "cntlid": 77, 00:21:09.943 "qid": 0, 00:21:09.943 "state": "enabled", 00:21:09.943 "thread": "nvmf_tgt_poll_group_000", 00:21:09.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:09.943 "listen_address": { 00:21:09.943 "trtype": "TCP", 00:21:09.943 "adrfam": "IPv4", 00:21:09.943 "traddr": "10.0.0.2", 00:21:09.943 "trsvcid": "4420" 00:21:09.943 }, 00:21:09.943 "peer_address": { 00:21:09.943 "trtype": "TCP", 00:21:09.943 "adrfam": "IPv4", 00:21:09.943 "traddr": "10.0.0.1", 00:21:09.943 "trsvcid": "54150" 00:21:09.943 }, 00:21:09.943 "auth": { 00:21:09.943 "state": "completed", 00:21:09.943 "digest": "sha384", 00:21:09.943 "dhgroup": "ffdhe4096" 00:21:09.943 } 00:21:09.943 } 00:21:09.943 ]' 00:21:09.943 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.943 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.943 13:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.943 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.943 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.204 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.204 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.204 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.204 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:10.204 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:11.146 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.146 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:11.146 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.146 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.146 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.146 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.146 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.146 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.146 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.407 00:21:11.407 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.407 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.407 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.672 { 00:21:11.672 "cntlid": 79, 00:21:11.672 "qid": 0, 00:21:11.672 "state": "enabled", 00:21:11.672 "thread": "nvmf_tgt_poll_group_000", 00:21:11.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:11.672 "listen_address": { 00:21:11.672 "trtype": "TCP", 00:21:11.672 "adrfam": "IPv4", 00:21:11.672 "traddr": "10.0.0.2", 00:21:11.672 "trsvcid": "4420" 00:21:11.672 }, 00:21:11.672 "peer_address": { 00:21:11.672 "trtype": "TCP", 00:21:11.672 "adrfam": "IPv4", 00:21:11.672 "traddr": "10.0.0.1", 00:21:11.672 "trsvcid": "54168" 00:21:11.672 }, 00:21:11.672 "auth": { 00:21:11.672 "state": "completed", 00:21:11.672 "digest": "sha384", 00:21:11.672 "dhgroup": "ffdhe4096" 00:21:11.672 } 00:21:11.672 } 00:21:11.672 ]' 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.672 13:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.672 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.933 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:11.933 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.875 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:12.875 13:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.876 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.876 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.876 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.876 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.876 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.876 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.876 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.876 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.447 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.447 { 00:21:13.447 "cntlid": 81, 00:21:13.447 "qid": 0, 00:21:13.447 "state": "enabled", 00:21:13.447 "thread": "nvmf_tgt_poll_group_000", 00:21:13.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:13.447 "listen_address": { 00:21:13.447 "trtype": "TCP", 00:21:13.447 "adrfam": "IPv4", 00:21:13.447 "traddr": "10.0.0.2", 00:21:13.447 "trsvcid": "4420" 00:21:13.447 }, 00:21:13.447 "peer_address": { 00:21:13.447 "trtype": "TCP", 00:21:13.447 "adrfam": "IPv4", 00:21:13.447 "traddr": "10.0.0.1", 00:21:13.447 "trsvcid": "54184" 00:21:13.447 }, 00:21:13.447 "auth": { 00:21:13.447 "state": "completed", 00:21:13.447 "digest": 
"sha384", 00:21:13.447 "dhgroup": "ffdhe6144" 00:21:13.447 } 00:21:13.447 } 00:21:13.447 ]' 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.447 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.708 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.708 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.708 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.708 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:13.708 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.651 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.223 00:21:15.223 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.223 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.223 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.223 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.223 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.223 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.223 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.223 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.483 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.483 { 00:21:15.483 "cntlid": 83, 00:21:15.483 "qid": 0, 00:21:15.483 "state": "enabled", 00:21:15.483 "thread": "nvmf_tgt_poll_group_000", 00:21:15.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:15.483 "listen_address": { 00:21:15.483 "trtype": "TCP", 00:21:15.483 "adrfam": "IPv4", 00:21:15.483 "traddr": "10.0.0.2", 00:21:15.483 
"trsvcid": "4420" 00:21:15.483 }, 00:21:15.483 "peer_address": { 00:21:15.483 "trtype": "TCP", 00:21:15.483 "adrfam": "IPv4", 00:21:15.483 "traddr": "10.0.0.1", 00:21:15.483 "trsvcid": "56694" 00:21:15.484 }, 00:21:15.484 "auth": { 00:21:15.484 "state": "completed", 00:21:15.484 "digest": "sha384", 00:21:15.484 "dhgroup": "ffdhe6144" 00:21:15.484 } 00:21:15.484 } 00:21:15.484 ]' 00:21:15.484 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.484 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.484 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.484 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:15.484 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.484 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.484 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.484 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.745 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:15.745 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:16.315 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.315 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:16.315 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.315 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:16.576 
13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.576 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.577 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.577 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.148 00:21:17.148 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.148 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.148 13:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.148 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.148 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.148 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.148 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.148 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.148 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.148 { 00:21:17.148 "cntlid": 85, 00:21:17.148 "qid": 0, 00:21:17.148 "state": "enabled", 00:21:17.148 "thread": "nvmf_tgt_poll_group_000", 00:21:17.148 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:17.148 "listen_address": { 00:21:17.148 "trtype": "TCP", 00:21:17.148 "adrfam": "IPv4", 00:21:17.148 "traddr": "10.0.0.2", 00:21:17.148 "trsvcid": "4420" 00:21:17.148 }, 00:21:17.148 "peer_address": { 00:21:17.148 "trtype": "TCP", 00:21:17.148 "adrfam": "IPv4", 00:21:17.148 "traddr": "10.0.0.1", 00:21:17.148 "trsvcid": "56710" 00:21:17.148 }, 00:21:17.148 "auth": { 00:21:17.148 "state": "completed", 00:21:17.148 "digest": "sha384", 00:21:17.148 "dhgroup": "ffdhe6144" 00:21:17.148 } 00:21:17.148 } 00:21:17.149 ]' 00:21:17.149 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.149 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.149 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.410 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.410 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.410 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.410 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.410 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.410 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:17.410 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.352 13:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.352 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.924 00:21:18.924 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.924 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.924 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.924 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.924 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.924 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.924 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.924 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.924 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.924 { 00:21:18.924 "cntlid": 87, 
00:21:18.924 "qid": 0, 00:21:18.924 "state": "enabled", 00:21:18.924 "thread": "nvmf_tgt_poll_group_000", 00:21:18.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:18.924 "listen_address": { 00:21:18.924 "trtype": "TCP", 00:21:18.924 "adrfam": "IPv4", 00:21:18.924 "traddr": "10.0.0.2", 00:21:18.924 "trsvcid": "4420" 00:21:18.924 }, 00:21:18.924 "peer_address": { 00:21:18.924 "trtype": "TCP", 00:21:18.924 "adrfam": "IPv4", 00:21:18.924 "traddr": "10.0.0.1", 00:21:18.924 "trsvcid": "56744" 00:21:18.924 }, 00:21:18.924 "auth": { 00:21:18.924 "state": "completed", 00:21:18.924 "digest": "sha384", 00:21:18.924 "dhgroup": "ffdhe6144" 00:21:18.924 } 00:21:18.924 } 00:21:18.924 ]' 00:21:18.924 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.184 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.184 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.184 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.184 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.184 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.184 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.184 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.445 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:19.445 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:20.015 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.015 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.015 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.015 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.015 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.016 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.016 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.016 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.016 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.276 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.847 00:21:20.847 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.847 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.847 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.107 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.107 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.107 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.107 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.107 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.107 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.107 { 00:21:21.107 "cntlid": 89, 00:21:21.107 "qid": 0, 00:21:21.107 "state": "enabled", 00:21:21.107 "thread": "nvmf_tgt_poll_group_000", 00:21:21.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:21.107 "listen_address": { 00:21:21.107 "trtype": "TCP", 00:21:21.107 "adrfam": "IPv4", 00:21:21.107 "traddr": "10.0.0.2", 00:21:21.107 "trsvcid": "4420" 00:21:21.107 }, 00:21:21.107 "peer_address": { 00:21:21.107 "trtype": "TCP", 00:21:21.107 "adrfam": "IPv4", 00:21:21.107 "traddr": "10.0.0.1", 00:21:21.107 "trsvcid": "56788" 00:21:21.107 }, 00:21:21.107 "auth": { 00:21:21.107 "state": "completed", 00:21:21.107 "digest": "sha384", 00:21:21.107 "dhgroup": "ffdhe8192" 00:21:21.107 } 00:21:21.107 } 00:21:21.107 ]' 00:21:21.107 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.107 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.107 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.107 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.107 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.107 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.107 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.107 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.368 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:21.368 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:22.374 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.374 13:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.374 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.019 00:21:23.019 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.019 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.019 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.019 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.019 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:23.019 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.019 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.019 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.019 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.019 { 00:21:23.019 "cntlid": 91, 00:21:23.019 "qid": 0, 00:21:23.019 "state": "enabled", 00:21:23.019 "thread": "nvmf_tgt_poll_group_000", 00:21:23.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:23.019 "listen_address": { 00:21:23.019 "trtype": "TCP", 00:21:23.019 "adrfam": "IPv4", 00:21:23.019 "traddr": "10.0.0.2", 00:21:23.019 "trsvcid": "4420" 00:21:23.019 }, 00:21:23.019 "peer_address": { 00:21:23.019 "trtype": "TCP", 00:21:23.019 "adrfam": "IPv4", 00:21:23.019 "traddr": "10.0.0.1", 00:21:23.019 "trsvcid": "56808" 00:21:23.019 }, 00:21:23.019 "auth": { 00:21:23.019 "state": "completed", 00:21:23.019 "digest": "sha384", 00:21:23.019 "dhgroup": "ffdhe8192" 00:21:23.019 } 00:21:23.019 } 00:21:23.019 ]' 00:21:23.019 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.019 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.019 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.019 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.279 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.279 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.279 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.279 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.279 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:23.279 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:24.219 13:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.219 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.480 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.480 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.480 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.480 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.050 00:21:25.050 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.050 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.050 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.050 13:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.050 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.050 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.050 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.050 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.050 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.050 { 00:21:25.050 "cntlid": 93, 00:21:25.050 "qid": 0, 00:21:25.050 "state": "enabled", 00:21:25.050 "thread": "nvmf_tgt_poll_group_000", 00:21:25.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:25.050 "listen_address": { 00:21:25.050 "trtype": "TCP", 00:21:25.050 "adrfam": "IPv4", 00:21:25.050 "traddr": "10.0.0.2", 00:21:25.050 "trsvcid": "4420" 00:21:25.050 }, 00:21:25.050 "peer_address": { 00:21:25.050 "trtype": "TCP", 00:21:25.050 "adrfam": "IPv4", 00:21:25.050 "traddr": "10.0.0.1", 00:21:25.050 "trsvcid": "60440" 00:21:25.050 }, 00:21:25.050 "auth": { 00:21:25.050 "state": "completed", 00:21:25.050 "digest": "sha384", 00:21:25.050 "dhgroup": "ffdhe8192" 00:21:25.050 } 00:21:25.050 } 00:21:25.050 ]' 00:21:25.050 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.050 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.050 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.050 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:25.050 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.310 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.310 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.310 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.310 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:25.310 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:26.253 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.253 13:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.253 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.824 00:21:26.824 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.824 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.824 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.084 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.084 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.084 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.084 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.084 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.084 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.084 { 00:21:27.084 "cntlid": 95, 00:21:27.084 "qid": 0, 00:21:27.084 "state": "enabled", 00:21:27.084 "thread": "nvmf_tgt_poll_group_000", 00:21:27.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:27.084 "listen_address": { 00:21:27.084 "trtype": "TCP", 00:21:27.084 "adrfam": "IPv4", 00:21:27.084 "traddr": "10.0.0.2", 00:21:27.084 "trsvcid": "4420" 00:21:27.084 }, 00:21:27.084 "peer_address": { 00:21:27.084 "trtype": "TCP", 00:21:27.084 "adrfam": "IPv4", 00:21:27.084 "traddr": "10.0.0.1", 00:21:27.084 "trsvcid": "60458" 00:21:27.084 }, 00:21:27.084 "auth": { 00:21:27.084 "state": "completed", 00:21:27.084 "digest": "sha384", 00:21:27.084 "dhgroup": "ffdhe8192" 00:21:27.084 } 00:21:27.084 } 00:21:27.084 ]' 00:21:27.084 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.084 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.084 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.084 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.084 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.084 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.084 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.084 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.343 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:27.343 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:28.282 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.282 13:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:28.282 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.282 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.282 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.282 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:28.282 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.282 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.282 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:28.282 13:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:28.282 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:28.282 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.282 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.282 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:28.282 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.282 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.282 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.282 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.282 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.282 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.282 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.283 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.283 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.543 00:21:28.543 
13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.543 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.543 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.803 { 00:21:28.803 "cntlid": 97, 00:21:28.803 "qid": 0, 00:21:28.803 "state": "enabled", 00:21:28.803 "thread": "nvmf_tgt_poll_group_000", 00:21:28.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:28.803 "listen_address": { 00:21:28.803 "trtype": "TCP", 00:21:28.803 "adrfam": "IPv4", 00:21:28.803 "traddr": "10.0.0.2", 00:21:28.803 "trsvcid": "4420" 00:21:28.803 }, 00:21:28.803 "peer_address": { 00:21:28.803 "trtype": "TCP", 00:21:28.803 "adrfam": "IPv4", 00:21:28.803 "traddr": "10.0.0.1", 00:21:28.803 "trsvcid": "60484" 00:21:28.803 }, 00:21:28.803 "auth": { 00:21:28.803 "state": "completed", 00:21:28.803 "digest": "sha512", 00:21:28.803 "dhgroup": "null" 00:21:28.803 } 00:21:28.803 } 00:21:28.803 ]' 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.803 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.063 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:29.063 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:30.006 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.006 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:30.006 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.006 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.006 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.006 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.006 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.007 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.267 00:21:30.267 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.267 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.267 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.527 { 00:21:30.527 "cntlid": 99, 00:21:30.527 "qid": 0, 00:21:30.527 "state": "enabled", 00:21:30.527 "thread": "nvmf_tgt_poll_group_000", 00:21:30.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:30.527 "listen_address": { 00:21:30.527 "trtype": "TCP", 00:21:30.527 "adrfam": "IPv4", 00:21:30.527 "traddr": "10.0.0.2", 00:21:30.527 "trsvcid": "4420" 00:21:30.527 }, 00:21:30.527 "peer_address": { 00:21:30.527 "trtype": "TCP", 00:21:30.527 "adrfam": "IPv4", 00:21:30.527 "traddr": "10.0.0.1", 00:21:30.527 "trsvcid": "60530" 00:21:30.527 }, 00:21:30.527 "auth": { 00:21:30.527 "state": "completed", 00:21:30.527 "digest": "sha512", 00:21:30.527 "dhgroup": "null" 00:21:30.527 } 00:21:30.527 } 00:21:30.527 ]' 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.527 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.786 13:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:30.786 13:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:31.356 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.356 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:31.356 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.356 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.356 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.356 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.356 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:31.356 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
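For readers following the trace: the assertion step that comes after each attach is the heart of this test. Once the host controller authenticates, the target's qpair listing must report exactly the digest, DH group, and auth state that were just configured. A condensed sketch of that check, using the rpc.py socket and jq filters visible in this trace (simplified from the escaped [[ ... ]] comparisons auth.sh actually emits):

    # Per-connection verification, condensed from the target/auth.sh trace.
    # rpc.py here stands for spdk/scripts/rpc.py aimed at the target instance.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
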
00:21:31.617 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.877 00:21:31.877 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.877 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.877 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.136 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.136 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.137 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.137 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.137 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.137 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.137 { 00:21:32.137 "cntlid": 101, 00:21:32.137 "qid": 0, 00:21:32.137 "state": "enabled", 00:21:32.137 "thread": "nvmf_tgt_poll_group_000", 00:21:32.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:32.137 "listen_address": { 00:21:32.137 "trtype": "TCP", 00:21:32.137 "adrfam": "IPv4", 00:21:32.137 "traddr": "10.0.0.2", 00:21:32.137 "trsvcid": "4420" 00:21:32.137 }, 00:21:32.137 "peer_address": { 00:21:32.137 "trtype": "TCP", 00:21:32.137 "adrfam": "IPv4", 00:21:32.137 "traddr": "10.0.0.1", 00:21:32.137 "trsvcid": "60570" 00:21:32.137 }, 00:21:32.137 "auth": { 00:21:32.137 "state": "completed", 00:21:32.137 "digest": "sha512", 00:21:32.137 "dhgroup": "null" 00:21:32.137 } 00:21:32.137 } 00:21:32.137 ]' 00:21:32.137 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.137 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.137 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.137 13:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:32.137 13:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.137 13:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.137 13:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.137 13:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.396 13:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:32.396 13:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.336 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.596 00:21:33.596 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.596 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.596 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.856 { 00:21:33.856 "cntlid": 103, 00:21:33.856 "qid": 0, 00:21:33.856 "state": "enabled", 00:21:33.856 "thread": "nvmf_tgt_poll_group_000", 00:21:33.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:33.856 "listen_address": { 00:21:33.856 "trtype": "TCP", 00:21:33.856 "adrfam": "IPv4", 00:21:33.856 "traddr": "10.0.0.2", 00:21:33.856 "trsvcid": "4420" 00:21:33.856 }, 00:21:33.856 "peer_address": { 00:21:33.856 "trtype": "TCP", 00:21:33.856 "adrfam": "IPv4", 00:21:33.856 "traddr": "10.0.0.1", 00:21:33.856 "trsvcid": "60596" 00:21:33.856 }, 00:21:33.856 "auth": { 00:21:33.856 "state": "completed", 00:21:33.856 "digest": "sha512", 00:21:33.856 "dhgroup": "null" 00:21:33.856 } 00:21:33.856 } 00:21:33.856 ]' 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.856 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.116 13:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:34.116 13:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
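Each pass above is one iteration of the same nested sweep: for every digest, every DH group, and every key index, auth.sh reconfigures the host's DH-HMAC-CHAP options, registers the host on the subsystem with the per-key secret (plus the controller secret where one exists), attaches a controller, verifies the negotiated parameters over nvmf_subsystem_get_qpairs, and tears the connection back down. Reconstructed from the target/auth.sh line numbers in this xtrace (a sketch, not the verbatim script):

    # Sweep reconstructed from the xtrace (auth.sh@118-@123 as shown above);
    # setup of ${keys[@]}/${ckeys[@]} and error handling are elided.
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          # add_host + bdev attach + qpair checks + detach, as traced above:
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done
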
00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.056 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.316 00:21:35.316 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.316 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.316 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.316 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.316 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.316 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.316 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.582 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.582 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.582 { 00:21:35.582 "cntlid": 105, 00:21:35.582 "qid": 0, 00:21:35.582 "state": "enabled", 00:21:35.582 "thread": "nvmf_tgt_poll_group_000", 00:21:35.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:35.582 "listen_address": { 00:21:35.582 "trtype": "TCP", 00:21:35.582 "adrfam": "IPv4", 00:21:35.582 "traddr": "10.0.0.2", 00:21:35.582 "trsvcid": "4420" 00:21:35.582 }, 00:21:35.582 "peer_address": { 00:21:35.582 "trtype": "TCP", 00:21:35.582 "adrfam": "IPv4", 00:21:35.582 "traddr": "10.0.0.1", 00:21:35.582 "trsvcid": "42840" 00:21:35.582 }, 00:21:35.582 "auth": { 00:21:35.582 "state": "completed", 00:21:35.582 "digest": "sha512", 00:21:35.582 "dhgroup": "ffdhe2048" 00:21:35.582 } 00:21:35.582 } 00:21:35.582 ]' 00:21:35.582 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.582 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.582 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.582 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:35.582 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.582 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.582 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.582 13:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.843 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:35.843 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:36.414 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.414 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.414 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.414 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.674 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.675 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.934 00:21:36.934 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.934 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.934 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.194 { 00:21:37.194 "cntlid": 107, 00:21:37.194 "qid": 0, 00:21:37.194 "state": "enabled", 00:21:37.194 "thread": "nvmf_tgt_poll_group_000", 00:21:37.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:37.194 "listen_address": { 00:21:37.194 "trtype": "TCP", 00:21:37.194 "adrfam": "IPv4", 00:21:37.194 "traddr": "10.0.0.2", 00:21:37.194 "trsvcid": "4420" 00:21:37.194 }, 00:21:37.194 "peer_address": { 00:21:37.194 "trtype": "TCP", 00:21:37.194 "adrfam": "IPv4", 00:21:37.194 "traddr": "10.0.0.1", 00:21:37.194 "trsvcid": "42878" 00:21:37.194 }, 00:21:37.194 "auth": { 00:21:37.194 "state": "completed", 00:21:37.194 "digest": "sha512", 00:21:37.194 "dhgroup": "ffdhe2048" 00:21:37.194 } 00:21:37.194 } 00:21:37.194 ]' 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.194 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.454 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:37.454 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
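[Note: each connect_authenticate pass in this trace reduces to the same few RPCs. A minimal sketch of one sha512/ffdhe2048 pass, reusing the sockets, NQNs and key names visible in the log (key1/ckey1 are keyring entries loaded earlier in auth.sh; this is a paraphrase, not the verbatim script):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

  # pin the host-side bdev layer to a single digest/dhgroup combination
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # register the host on the target subsystem with key1 (ckey1 enables bidirectional auth)
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # attach a controller; DH-HMAC-CHAP runs during the connect
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # the attach only yields a named controller if authentication succeeded
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
]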
00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.394 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.654 00:21:38.654 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.654 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.654 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.914 { 00:21:38.914 "cntlid": 109, 00:21:38.914 "qid": 0, 00:21:38.914 "state": "enabled", 00:21:38.914 "thread": "nvmf_tgt_poll_group_000", 00:21:38.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:38.914 "listen_address": { 00:21:38.914 "trtype": "TCP", 00:21:38.914 "adrfam": "IPv4", 00:21:38.914 "traddr": "10.0.0.2", 00:21:38.914 "trsvcid": "4420" 00:21:38.914 }, 00:21:38.914 "peer_address": { 00:21:38.914 "trtype": "TCP", 00:21:38.914 "adrfam": "IPv4", 00:21:38.914 "traddr": "10.0.0.1", 00:21:38.914 "trsvcid": "42902" 00:21:38.914 }, 00:21:38.914 "auth": { 00:21:38.914 "state": "completed", 00:21:38.914 "digest": "sha512", 00:21:38.914 "dhgroup": "ffdhe2048" 00:21:38.914 } 00:21:38.914 } 00:21:38.914 ]' 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.914 13:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.914 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.174 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:39.174 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.113 13:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.113 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.114 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.114 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.114 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.114 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.374 00:21:40.374 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.374 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.374 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.634 { 00:21:40.634 "cntlid": 111, 00:21:40.634 "qid": 0, 00:21:40.634 "state": "enabled", 00:21:40.634 "thread": "nvmf_tgt_poll_group_000", 00:21:40.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:40.634 "listen_address": { 00:21:40.634 "trtype": "TCP", 00:21:40.634 "adrfam": "IPv4", 00:21:40.634 "traddr": "10.0.0.2", 00:21:40.634 "trsvcid": "4420" 00:21:40.634 }, 00:21:40.634 "peer_address": { 00:21:40.634 "trtype": "TCP", 00:21:40.634 "adrfam": "IPv4", 00:21:40.634 "traddr": "10.0.0.1", 00:21:40.634 "trsvcid": "42934" 00:21:40.634 }, 00:21:40.634 "auth": { 00:21:40.634 "state": "completed", 00:21:40.634 "digest": "sha512", 00:21:40.634 "dhgroup": "ffdhe2048" 00:21:40.634 } 00:21:40.634 } 00:21:40.634 ]' 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.634 
13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.634 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.895 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:40.895 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:41.464 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.725 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.985 00:21:41.985 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.985 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.985 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.247 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.247 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.247 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.247 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.247 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.247 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.247 { 00:21:42.247 "cntlid": 113, 00:21:42.247 "qid": 0, 00:21:42.247 "state": "enabled", 00:21:42.247 "thread": "nvmf_tgt_poll_group_000", 00:21:42.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:42.247 "listen_address": { 00:21:42.247 "trtype": "TCP", 00:21:42.247 "adrfam": "IPv4", 00:21:42.247 "traddr": "10.0.0.2", 00:21:42.247 "trsvcid": "4420" 00:21:42.247 }, 00:21:42.247 "peer_address": { 00:21:42.247 "trtype": "TCP", 00:21:42.247 "adrfam": "IPv4", 00:21:42.247 "traddr": "10.0.0.1", 00:21:42.247 "trsvcid": "42950" 00:21:42.247 }, 00:21:42.247 "auth": { 00:21:42.247 "state": "completed", 00:21:42.247 "digest": "sha512", 00:21:42.247 "dhgroup": "ffdhe3072" 00:21:42.247 } 00:21:42.247 } 00:21:42.247 ]' 00:21:42.247 13:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.247 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.247 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.247 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:42.247 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.515 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.515 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.515 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.515 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:42.515 13:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.458 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.718 00:21:43.718 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.718 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.718 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.979 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.979 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.979 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.979 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.979 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.979 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.979 { 00:21:43.979 "cntlid": 115, 00:21:43.979 "qid": 0, 00:21:43.979 "state": "enabled", 00:21:43.979 "thread": "nvmf_tgt_poll_group_000", 00:21:43.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:43.979 "listen_address": { 00:21:43.979 "trtype": "TCP", 00:21:43.979 "adrfam": "IPv4", 00:21:43.979 "traddr": "10.0.0.2", 00:21:43.979 "trsvcid": "4420" 00:21:43.979 }, 00:21:43.979 "peer_address": { 00:21:43.979 "trtype": "TCP", 00:21:43.979 "adrfam": "IPv4", 
00:21:43.979 "traddr": "10.0.0.1", 00:21:43.979 "trsvcid": "42992" 00:21:43.979 }, 00:21:43.979 "auth": { 00:21:43.979 "state": "completed", 00:21:43.979 "digest": "sha512", 00:21:43.979 "dhgroup": "ffdhe3072" 00:21:43.979 } 00:21:43.979 } 00:21:43.979 ]' 00:21:43.979 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.979 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.979 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.979 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:43.979 13:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.239 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.239 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.240 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.240 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:44.240 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:45.179 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.179 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:45.179 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.179 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.179 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.179 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.179 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.179 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
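[Note: the qpair verification that follows each attach is identical across cycles; roughly, with the same $rpc/$subnqn as in the sketch above (dhgroup shown for the ffdhe3072 passes):

  qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]  # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]  # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # auth finished, qpair enabled
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0   # tear down before the next key
]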
00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.180 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.440 00:21:45.440 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.440 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.440 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.700 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.700 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.700 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.700 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.700 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.700 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.700 { 00:21:45.700 "cntlid": 117, 00:21:45.700 "qid": 0, 00:21:45.700 "state": "enabled", 00:21:45.700 "thread": "nvmf_tgt_poll_group_000", 00:21:45.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:45.700 "listen_address": { 00:21:45.700 "trtype": "TCP", 
00:21:45.700 "adrfam": "IPv4", 00:21:45.700 "traddr": "10.0.0.2", 00:21:45.700 "trsvcid": "4420" 00:21:45.700 }, 00:21:45.700 "peer_address": { 00:21:45.700 "trtype": "TCP", 00:21:45.700 "adrfam": "IPv4", 00:21:45.700 "traddr": "10.0.0.1", 00:21:45.700 "trsvcid": "35764" 00:21:45.700 }, 00:21:45.700 "auth": { 00:21:45.700 "state": "completed", 00:21:45.700 "digest": "sha512", 00:21:45.700 "dhgroup": "ffdhe3072" 00:21:45.700 } 00:21:45.700 } 00:21:45.700 ]' 00:21:45.700 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.700 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.700 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.960 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:45.960 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.960 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.960 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.960 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.960 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:45.960 13:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.900 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.160 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.160 00:21:47.420 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.420 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.420 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.420 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.420 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.420 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.420 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.420 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.420 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.420 { 00:21:47.420 "cntlid": 119, 00:21:47.420 "qid": 0, 00:21:47.420 "state": "enabled", 00:21:47.420 "thread": "nvmf_tgt_poll_group_000", 00:21:47.420 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:47.420 "listen_address": { 00:21:47.420 "trtype": "TCP", 00:21:47.420 "adrfam": "IPv4", 00:21:47.420 "traddr": "10.0.0.2", 00:21:47.420 "trsvcid": "4420" 00:21:47.420 }, 00:21:47.420 "peer_address": { 00:21:47.420 "trtype": "TCP", 00:21:47.420 "adrfam": "IPv4", 00:21:47.420 "traddr": "10.0.0.1", 00:21:47.420 "trsvcid": "35792" 00:21:47.420 }, 00:21:47.420 "auth": { 00:21:47.420 "state": "completed", 00:21:47.420 "digest": "sha512", 00:21:47.420 "dhgroup": "ffdhe3072" 00:21:47.420 } 00:21:47.420 } 00:21:47.420 ]' 00:21:47.420 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.420 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.420 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.679 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:47.679 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.679 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.680 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.680 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.680 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:47.680 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:48.619 13:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.619 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.620 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.620 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.620 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.880 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.880 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.880 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.880 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.880 00:21:49.141 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.141 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.141 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.141 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.141 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.141 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.141 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.141 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.141 13:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.141 { 00:21:49.141 "cntlid": 121, 00:21:49.141 "qid": 0, 00:21:49.141 "state": "enabled", 00:21:49.141 "thread": "nvmf_tgt_poll_group_000", 00:21:49.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:49.141 "listen_address": { 00:21:49.141 "trtype": "TCP", 00:21:49.141 "adrfam": "IPv4", 00:21:49.141 "traddr": "10.0.0.2", 00:21:49.141 "trsvcid": "4420" 00:21:49.141 }, 00:21:49.141 "peer_address": { 00:21:49.141 "trtype": "TCP", 00:21:49.141 "adrfam": "IPv4", 00:21:49.141 "traddr": "10.0.0.1", 00:21:49.141 "trsvcid": "35820" 00:21:49.141 }, 00:21:49.141 "auth": { 00:21:49.141 "state": "completed", 00:21:49.141 "digest": "sha512", 00:21:49.141 "dhgroup": "ffdhe4096" 00:21:49.141 } 00:21:49.141 } 00:21:49.141 ]' 00:21:49.141 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.141 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.141 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.402 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:49.402 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.402 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.402 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.402 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.402 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:49.402 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:50.343 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.343 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:50.343 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.343 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.343 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
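[Note: this whole stretch of the trace is driven by two nested loops, visible as the auth.sh@119/@120 markers: the outer loop walks the dhgroup list, the inner one every key index. Approximately (dhgroups/keys are arrays populated earlier in auth.sh; only the sha512 digest is exercised here):

  for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048, ffdhe3072, ffdhe4096 in this section
    for keyid in "${!keys[@]}"; do       # key0..key3
      hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      connect_authenticate sha512 "$dhgroup" "$keyid"
    done
  done
]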
00:21:50.343 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.343 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.343 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.604 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.604 00:21:50.865 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.865 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.865 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.865 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.865 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.865 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.865 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.865 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.865 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.865 { 00:21:50.865 "cntlid": 123, 00:21:50.865 "qid": 0, 00:21:50.865 "state": "enabled", 00:21:50.865 "thread": "nvmf_tgt_poll_group_000", 00:21:50.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:50.865 "listen_address": { 00:21:50.865 "trtype": "TCP", 00:21:50.865 "adrfam": "IPv4", 00:21:50.865 "traddr": "10.0.0.2", 00:21:50.865 "trsvcid": "4420" 00:21:50.865 }, 00:21:50.865 "peer_address": { 00:21:50.865 "trtype": "TCP", 00:21:50.865 "adrfam": "IPv4", 00:21:50.865 "traddr": "10.0.0.1", 00:21:50.865 "trsvcid": "35854" 00:21:50.865 }, 00:21:50.865 "auth": { 00:21:50.865 "state": "completed", 00:21:50.865 "digest": "sha512", 00:21:50.865 "dhgroup": "ffdhe4096" 00:21:50.865 } 00:21:50.865 } 00:21:50.865 ]' 00:21:50.865 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.865 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.865 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.125 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:51.125 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.125 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.125 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.125 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.386 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:51.386 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:51.956 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.956 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:51.956 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.956 13:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.956 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.956 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.956 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.956 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:52.218 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:52.218 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.218 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.218 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:52.218 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:52.218 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.218 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.219 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.219 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.219 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.219 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.219 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.219 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.479 00:21:52.479 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.479 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.480 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.740 13:27:00 
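Each block in this stretch of the log is one iteration of the test's nested loops: for every DH group and every key index, auth.sh reconfigures the host-side bdev layer, re-registers the host on the subsystem, reconnects, and verifies the handshake. Reduced to a hedged sketch (loop values are the ones actually exercised in this section; connect_authenticate is the helper seen at target/auth.sh@123):

    # One authentication round per (dhgroup, key) combination.
    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in 0 1 2 3; do
        # Pin the initiator to a single digest/dhgroup so the handshake
        # must negotiate exactly this combination or fail.
        rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha512 "$dhgroup" "$keyid"
      done
    done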
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.740 { 00:21:52.740 "cntlid": 125, 00:21:52.740 "qid": 0, 00:21:52.740 "state": "enabled", 00:21:52.740 "thread": "nvmf_tgt_poll_group_000", 00:21:52.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:52.740 "listen_address": { 00:21:52.740 "trtype": "TCP", 00:21:52.740 "adrfam": "IPv4", 00:21:52.740 "traddr": "10.0.0.2", 00:21:52.740 "trsvcid": "4420" 00:21:52.740 }, 00:21:52.740 "peer_address": { 00:21:52.740 "trtype": "TCP", 00:21:52.740 "adrfam": "IPv4", 00:21:52.740 "traddr": "10.0.0.1", 00:21:52.740 "trsvcid": "35868" 00:21:52.740 }, 00:21:52.740 "auth": { 00:21:52.740 "state": "completed", 00:21:52.740 "digest": "sha512", 00:21:52.740 "dhgroup": "ffdhe4096" 00:21:52.740 } 00:21:52.740 } 00:21:52.740 ]' 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.740 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.001 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:53.001 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.943 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.203 00:21:54.203 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.203 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.203 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.464 13:27:02 
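The nvme connect calls interleaved above re-exercise the same keys through the kernel initiator, passing them in nvme-cli's DHHC-1 container format: "DHHC-1:<t>:<base64>:", where <t> names the hash used to transform the stored key (00 = untransformed, 01/02/03 = SHA-256/-384/-512) and the base64 payload carries the key material plus a CRC-32 — a hedged reading of the format; the secret values themselves are straight from the log. Shape of one such call, secrets and UUIDs abbreviated:

    # Kernel-side counterpart of bdev_nvme_attach_controller; passing
    # --dhchap-ctrl-secret as well makes authentication bidirectional.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:00539ede-...-a4bf01928396" \
        --hostid "00539ede-...-a4bf01928396" -l 0 \
        --dhchap-secret "DHHC-1:01:<host key, base64>:" \
        --dhchap-ctrl-secret "DHHC-1:02:<controller key, base64>:"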
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.464 { 00:21:54.464 "cntlid": 127, 00:21:54.464 "qid": 0, 00:21:54.464 "state": "enabled", 00:21:54.464 "thread": "nvmf_tgt_poll_group_000", 00:21:54.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:54.464 "listen_address": { 00:21:54.464 "trtype": "TCP", 00:21:54.464 "adrfam": "IPv4", 00:21:54.464 "traddr": "10.0.0.2", 00:21:54.464 "trsvcid": "4420" 00:21:54.464 }, 00:21:54.464 "peer_address": { 00:21:54.464 "trtype": "TCP", 00:21:54.464 "adrfam": "IPv4", 00:21:54.464 "traddr": "10.0.0.1", 00:21:54.464 "trsvcid": "59048" 00:21:54.464 }, 00:21:54.464 "auth": { 00:21:54.464 "state": "completed", 00:21:54.464 "digest": "sha512", 00:21:54.464 "dhgroup": "ffdhe4096" 00:21:54.464 } 00:21:54.464 } 00:21:54.464 ]' 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.464 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.724 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:54.724 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.664 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.235 00:21:56.235 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.235 13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.235 
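Note the asymmetry in the key3 rounds just completed: ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion at target/auth.sh@68 drops --dhchap-ctrlr-key entirely, nvmf_subsystem_add_host registers only --dhchap-key key3, and the kernel connect carries a single DHHC-1:03 secret — unidirectional authentication, where the host proves its identity but does not challenge the controller. The idiom, isolated and with the positional $3 renamed for readability:

    # An empty ckeys entry yields an empty array, and therefore
    # no --dhchap-ctrlr-key flag at all on the RPC.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"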
13:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.235 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.235 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.235 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.235 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.235 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.235 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.235 { 00:21:56.235 "cntlid": 129, 00:21:56.235 "qid": 0, 00:21:56.235 "state": "enabled", 00:21:56.235 "thread": "nvmf_tgt_poll_group_000", 00:21:56.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:56.235 "listen_address": { 00:21:56.235 "trtype": "TCP", 00:21:56.235 "adrfam": "IPv4", 00:21:56.235 "traddr": "10.0.0.2", 00:21:56.235 "trsvcid": "4420" 00:21:56.235 }, 00:21:56.235 "peer_address": { 00:21:56.235 "trtype": "TCP", 00:21:56.235 "adrfam": "IPv4", 00:21:56.235 "traddr": "10.0.0.1", 00:21:56.235 "trsvcid": "59074" 00:21:56.235 }, 00:21:56.235 "auth": { 00:21:56.235 "state": "completed", 00:21:56.235 "digest": "sha512", 00:21:56.235 "dhgroup": "ffdhe6144" 00:21:56.235 } 00:21:56.235 } 00:21:56.235 ]' 00:21:56.235 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.235 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.235 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.495 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:56.495 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.495 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.495 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.495 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.495 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:56.495 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret 
DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:21:57.434 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.434 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:57.434 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.434 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.434 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.434 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.434 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:57.434 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:57.434 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:57.434 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.434 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.694 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:57.694 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:57.694 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.694 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.694 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.694 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.694 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.694 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.694 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.694 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.953 00:21:57.953 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.953 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.953 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.213 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.213 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.213 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.213 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.213 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.213 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.213 { 00:21:58.213 "cntlid": 131, 00:21:58.213 "qid": 0, 00:21:58.213 "state": "enabled", 00:21:58.213 "thread": "nvmf_tgt_poll_group_000", 00:21:58.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:58.213 "listen_address": { 00:21:58.213 "trtype": "TCP", 00:21:58.213 "adrfam": "IPv4", 00:21:58.213 "traddr": "10.0.0.2", 00:21:58.213 "trsvcid": "4420" 00:21:58.213 }, 00:21:58.213 "peer_address": { 00:21:58.213 "trtype": "TCP", 00:21:58.213 "adrfam": "IPv4", 00:21:58.213 "traddr": "10.0.0.1", 00:21:58.213 "trsvcid": "59114" 00:21:58.213 }, 00:21:58.213 "auth": { 00:21:58.213 "state": "completed", 00:21:58.213 "digest": "sha512", 00:21:58.213 "dhgroup": "ffdhe6144" 00:21:58.213 } 00:21:58.213 } 00:21:58.213 ]' 00:21:58.213 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.213 13:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.213 13:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.213 13:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:58.213 13:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.213 13:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.213 13:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.213 13:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.476 13:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:58.476 13:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:21:59.046 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.306 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.884 00:21:59.884 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.884 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.884 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.884 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.884 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.884 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.884 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.884 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.884 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.884 { 00:21:59.884 "cntlid": 133, 00:21:59.884 "qid": 0, 00:21:59.884 "state": "enabled", 00:21:59.884 "thread": "nvmf_tgt_poll_group_000", 00:21:59.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:59.884 "listen_address": { 00:21:59.884 "trtype": "TCP", 00:21:59.884 "adrfam": "IPv4", 00:21:59.884 "traddr": "10.0.0.2", 00:21:59.884 "trsvcid": "4420" 00:21:59.884 }, 00:21:59.884 "peer_address": { 00:21:59.884 "trtype": "TCP", 00:21:59.884 "adrfam": "IPv4", 00:21:59.884 "traddr": "10.0.0.1", 00:21:59.884 "trsvcid": "59146" 00:21:59.884 }, 00:21:59.884 "auth": { 00:21:59.884 "state": "completed", 00:21:59.884 "digest": "sha512", 00:21:59.884 "dhgroup": "ffdhe6144" 00:21:59.884 } 00:21:59.884 } 00:21:59.884 ]' 00:21:59.884 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.884 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.884 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.144 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:00.144 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.144 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.144 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.144 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.144 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret 
DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:22:00.145 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:22:01.085 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.085 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:01.085 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.085 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.085 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.085 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.085 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.085 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.085 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:01.085 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.085 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.085 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:01.085 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:01.085 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.086 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:01.086 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.086 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.086 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.086 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:01.086 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:01.086 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.717 00:22:01.717 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.717 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.717 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.717 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.717 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.717 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.717 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.717 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.717 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.717 { 00:22:01.717 "cntlid": 135, 00:22:01.717 "qid": 0, 00:22:01.717 "state": "enabled", 00:22:01.717 "thread": "nvmf_tgt_poll_group_000", 00:22:01.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:01.717 "listen_address": { 00:22:01.717 "trtype": "TCP", 00:22:01.717 "adrfam": "IPv4", 00:22:01.717 "traddr": "10.0.0.2", 00:22:01.717 "trsvcid": "4420" 00:22:01.717 }, 00:22:01.717 "peer_address": { 00:22:01.717 "trtype": "TCP", 00:22:01.717 "adrfam": "IPv4", 00:22:01.717 "traddr": "10.0.0.1", 00:22:01.717 "trsvcid": "59170" 00:22:01.717 }, 00:22:01.717 "auth": { 00:22:01.717 "state": "completed", 00:22:01.717 "digest": "sha512", 00:22:01.717 "dhgroup": "ffdhe6144" 00:22:01.717 } 00:22:01.717 } 00:22:01.717 ]' 00:22:01.717 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.717 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.717 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.043 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.043 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.043 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.043 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.043 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.043 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:22:02.043 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.996 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.997 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.997 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.997 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.997 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.567 00:22:03.567 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.567 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.567 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.827 { 00:22:03.827 "cntlid": 137, 00:22:03.827 "qid": 0, 00:22:03.827 "state": "enabled", 00:22:03.827 "thread": "nvmf_tgt_poll_group_000", 00:22:03.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:03.827 "listen_address": { 00:22:03.827 "trtype": "TCP", 00:22:03.827 "adrfam": "IPv4", 00:22:03.827 "traddr": "10.0.0.2", 00:22:03.827 "trsvcid": "4420" 00:22:03.827 }, 00:22:03.827 "peer_address": { 00:22:03.827 "trtype": "TCP", 00:22:03.827 "adrfam": "IPv4", 00:22:03.827 "traddr": "10.0.0.1", 00:22:03.827 "trsvcid": "59190" 00:22:03.827 }, 00:22:03.827 "auth": { 00:22:03.827 "state": "completed", 00:22:03.827 "digest": "sha512", 00:22:03.827 "dhgroup": "ffdhe8192" 00:22:03.827 } 00:22:03.827 } 00:22:03.827 ]' 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.827 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.088 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:22:04.088 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.029 13:27:12 
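Every round also ends in the same fixed order, visible again just above: detach the SPDK-host bdev controller once the qpair checks pass, replay the keys through nvme connect, then nvme disconnect and strip the host entry from the subsystem so the next combination starts clean. As a sketch (same NQNs as in the log; the kernel connect replay sits between the first and second steps):

    # Teardown steps of one round (kernel connect replay omitted).
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"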
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.029 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.473 00:22:05.473 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.473 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.473 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.733 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.733 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.733 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.733 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.733 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.733 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.733 { 00:22:05.733 "cntlid": 139, 00:22:05.733 "qid": 0, 00:22:05.733 "state": "enabled", 00:22:05.733 "thread": "nvmf_tgt_poll_group_000", 00:22:05.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:05.733 "listen_address": { 00:22:05.734 "trtype": "TCP", 00:22:05.734 "adrfam": "IPv4", 00:22:05.734 "traddr": "10.0.0.2", 00:22:05.734 "trsvcid": "4420" 00:22:05.734 }, 00:22:05.734 "peer_address": { 00:22:05.734 "trtype": "TCP", 00:22:05.734 "adrfam": "IPv4", 00:22:05.734 "traddr": "10.0.0.1", 00:22:05.734 "trsvcid": "49406" 00:22:05.734 }, 00:22:05.734 "auth": { 00:22:05.734 "state": "completed", 00:22:05.734 "digest": "sha512", 00:22:05.734 "dhgroup": "ffdhe8192" 00:22:05.734 } 00:22:05.734 } 00:22:05.734 ]' 00:22:05.734 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.734 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.734 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.996 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:05.996 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.996 13:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.996 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.996 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.996 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:22:05.996 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: --dhchap-ctrl-secret DHHC-1:02:Yjg1ZDU2ZGUxMGU5MzRmNDg4YWJlYTdhNjcyNmVmNGFkNDVkYmFlZTI2N2RlNzZiu66vkw==: 00:22:06.939 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.939 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.940 13:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.940 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.510 00:22:07.510 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.510 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.510 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.771 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.771 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.771 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.771 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.771 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.771 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.771 { 00:22:07.771 "cntlid": 141, 00:22:07.771 "qid": 0, 00:22:07.771 "state": "enabled", 00:22:07.771 "thread": "nvmf_tgt_poll_group_000", 00:22:07.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:07.771 "listen_address": { 00:22:07.771 "trtype": "TCP", 00:22:07.771 "adrfam": "IPv4", 00:22:07.771 "traddr": "10.0.0.2", 00:22:07.771 "trsvcid": "4420" 00:22:07.771 }, 00:22:07.771 "peer_address": { 00:22:07.771 "trtype": "TCP", 00:22:07.771 "adrfam": "IPv4", 00:22:07.771 "traddr": "10.0.0.1", 00:22:07.771 "trsvcid": "49440" 00:22:07.771 }, 00:22:07.771 "auth": { 00:22:07.771 "state": "completed", 00:22:07.771 "digest": "sha512", 00:22:07.771 "dhgroup": "ffdhe8192" 00:22:07.771 } 00:22:07.771 } 00:22:07.771 ]' 00:22:07.771 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.771 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.771 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.771 13:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:07.771 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.031 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.031 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.031 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.031 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:22:08.031 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:01:Y2UxZjViMmE1ZGEwYTdkNmVjMmUxYWMwYmMwM2JhYjDtP5LJ: 00:22:08.973 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.974 13:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:08.974 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.543 00:22:09.543 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.543 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.543 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.804 { 00:22:09.804 "cntlid": 143, 00:22:09.804 "qid": 0, 00:22:09.804 "state": "enabled", 00:22:09.804 "thread": "nvmf_tgt_poll_group_000", 00:22:09.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:09.804 "listen_address": { 00:22:09.804 "trtype": "TCP", 00:22:09.804 "adrfam": "IPv4", 00:22:09.804 "traddr": "10.0.0.2", 00:22:09.804 "trsvcid": "4420" 00:22:09.804 }, 00:22:09.804 "peer_address": { 00:22:09.804 "trtype": "TCP", 00:22:09.804 "adrfam": "IPv4", 00:22:09.804 "traddr": "10.0.0.1", 00:22:09.804 "trsvcid": "49466" 00:22:09.804 }, 00:22:09.804 "auth": { 00:22:09.804 "state": "completed", 00:22:09.804 "digest": "sha512", 00:22:09.804 "dhgroup": "ffdhe8192" 00:22:09.804 } 00:22:09.804 } 00:22:09.804 ]' 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.804 
13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.804 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.064 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:22:10.064 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.017 13:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.017 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.589 00:22:11.589 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.589 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.589 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.849 { 00:22:11.849 "cntlid": 145, 00:22:11.849 "qid": 0, 00:22:11.849 "state": "enabled", 00:22:11.849 "thread": "nvmf_tgt_poll_group_000", 00:22:11.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:11.849 "listen_address": { 00:22:11.849 "trtype": "TCP", 00:22:11.849 "adrfam": "IPv4", 00:22:11.849 "traddr": "10.0.0.2", 00:22:11.849 "trsvcid": "4420" 00:22:11.849 }, 00:22:11.849 "peer_address": { 00:22:11.849 
"trtype": "TCP", 00:22:11.849 "adrfam": "IPv4", 00:22:11.849 "traddr": "10.0.0.1", 00:22:11.849 "trsvcid": "49496" 00:22:11.849 }, 00:22:11.849 "auth": { 00:22:11.849 "state": "completed", 00:22:11.849 "digest": "sha512", 00:22:11.849 "dhgroup": "ffdhe8192" 00:22:11.849 } 00:22:11.849 } 00:22:11.849 ]' 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.849 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.110 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:22:12.110 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MTRmMTllNzg1NTBjNzQ4YzIzYTc5NDFmNTMxNGJjNjc1ZTA0YmE2YWE5MjA4MzFhdJuIoA==: --dhchap-ctrl-secret DHHC-1:03:NDU4ZjhhOGYxMmUwN2E2ZjYzNjAzMTliYzZlYTE5Yzc5N2MxYWJmNGY0ZGE5YjFhMWJmYzJkYWMwNzNjMmJjYoHPW94=: 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:13.050 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:13.051 13:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:13.311 request: 00:22:13.311 { 00:22:13.311 "name": "nvme0", 00:22:13.311 "trtype": "tcp", 00:22:13.311 "traddr": "10.0.0.2", 00:22:13.311 "adrfam": "ipv4", 00:22:13.311 "trsvcid": "4420", 00:22:13.311 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:13.311 "prchk_reftag": false, 00:22:13.311 "prchk_guard": false, 00:22:13.311 "hdgst": false, 00:22:13.311 "ddgst": false, 00:22:13.311 "dhchap_key": "key2", 00:22:13.311 "allow_unrecognized_csi": false, 00:22:13.311 "method": "bdev_nvme_attach_controller", 00:22:13.311 "req_id": 1 00:22:13.311 } 00:22:13.311 Got JSON-RPC error response 00:22:13.311 response: 00:22:13.311 { 00:22:13.311 "code": -5, 00:22:13.311 "message": "Input/output error" 00:22:13.311 } 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.311 13:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:13.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:13.883 request: 00:22:13.883 { 00:22:13.883 "name": "nvme0", 00:22:13.883 "trtype": "tcp", 00:22:13.883 "traddr": "10.0.0.2", 00:22:13.883 "adrfam": "ipv4", 00:22:13.883 "trsvcid": "4420", 00:22:13.883 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:13.883 "prchk_reftag": false, 00:22:13.883 "prchk_guard": false, 00:22:13.883 "hdgst": false, 00:22:13.883 "ddgst": false, 00:22:13.883 "dhchap_key": "key1", 00:22:13.883 "dhchap_ctrlr_key": "ckey2", 00:22:13.883 "allow_unrecognized_csi": false, 00:22:13.883 "method": "bdev_nvme_attach_controller", 00:22:13.883 "req_id": 1 00:22:13.883 } 00:22:13.883 Got JSON-RPC error response 00:22:13.883 response: 00:22:13.883 { 00:22:13.883 "code": -5, 00:22:13.883 "message": "Input/output error" 00:22:13.883 } 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:13.883 13:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.883 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.453 request: 00:22:14.453 { 00:22:14.453 "name": "nvme0", 00:22:14.453 "trtype": "tcp", 00:22:14.453 "traddr": "10.0.0.2", 00:22:14.453 "adrfam": "ipv4", 00:22:14.453 "trsvcid": "4420", 00:22:14.453 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:14.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:14.453 "prchk_reftag": false, 00:22:14.453 "prchk_guard": false, 00:22:14.453 "hdgst": false, 00:22:14.453 "ddgst": false, 00:22:14.453 "dhchap_key": "key1", 00:22:14.453 "dhchap_ctrlr_key": "ckey1", 00:22:14.453 "allow_unrecognized_csi": false, 00:22:14.453 "method": "bdev_nvme_attach_controller", 00:22:14.453 "req_id": 1 00:22:14.453 } 00:22:14.453 Got JSON-RPC error response 00:22:14.453 response: 00:22:14.453 { 00:22:14.453 "code": -5, 00:22:14.453 "message": "Input/output error" 00:22:14.453 } 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3845471 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3845471 ']' 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3845471 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3845471 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3845471' 00:22:14.453 killing process with pid 3845471 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3845471 00:22:14.453 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3845471 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3873092 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3873092 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3873092 ']' 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:15.395 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.335 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:16.335 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:16.335 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:16.335 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:16.335 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3873092 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3873092 ']' 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
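The keyring registration that follows condenses to the sequence below. This is a sketch for readability, not captured output: the rpc.py path and the /tmp/spdk.key-* file names are the ones this run generated earlier, and rpc.py is assumed to reach the freshly started nvmf_tgt on its default /var/tmp/spdk.sock (in the trace, rpc_cmd wraps this call).

# Sketch of the target-side keyring setup, assuming the workspace rpc.py path
# and the temp key files from this run; socket choice is an assumption.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Register each DH-HMAC-CHAP key file, plus its controller counterpart where
# one exists, under a keyring name later RPCs refer to by name.
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.sM1
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kV7
$rpc keyring_file_add_key key1  /tmp/spdk.key-sha256.H7o
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FQM
$rpc keyring_file_add_key key2  /tmp/spdk.key-sha384.i46
$rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G3b
$rpc keyring_file_add_key key3  /tmp/spdk.key-sha512.tvF   # key3 has no ckey3 in this run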
00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.335 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.596 null0 00:22:16.596 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.596 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:16.596 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.sM1 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.kV7 ]] 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kV7 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.H7o 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.FQM ]] 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FQM 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:16.597 13:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.i46 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.G3b ]] 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G3b 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tvF 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
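The connect-and-verify pattern exercised throughout this phase (connect_authenticate in target/auth.sh) boils down to the sketch below; it is not captured output. The host-side socket /var/tmp/host.sock, the host NQN, and the expected sha512/ffdhe8192 values are taken from the surrounding trace; the variable names are illustrative.

# Sketch: attach a controller from the host app using the keyring-backed key3,
# then confirm the qpair completed DH-HMAC-CHAP with the expected parameters.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Host side (the separate host app listens on /var/tmp/host.sock).
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key3

# The controller must exist on the host, and the target's qpair must report a
# completed auth exchange with the negotiated digest and DH group.
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Running target and host as two separate SPDK apps (rpc_cmd on the default socket versus hostrpc on /var/tmp/host.sock, as seen throughout the trace) is what lets the test drive both ends of the DH-HMAC-CHAP exchange independently.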
00:22:16.597 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.539 nvme0n1 00:22:17.540 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.540 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.540 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.800 { 00:22:17.800 "cntlid": 1, 00:22:17.800 "qid": 0, 00:22:17.800 "state": "enabled", 00:22:17.800 "thread": "nvmf_tgt_poll_group_000", 00:22:17.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:17.800 "listen_address": { 00:22:17.800 "trtype": "TCP", 00:22:17.800 "adrfam": "IPv4", 00:22:17.800 "traddr": "10.0.0.2", 00:22:17.800 "trsvcid": "4420" 00:22:17.800 }, 00:22:17.800 "peer_address": { 00:22:17.800 "trtype": "TCP", 00:22:17.800 "adrfam": "IPv4", 00:22:17.800 "traddr": "10.0.0.1", 00:22:17.800 "trsvcid": "60116" 00:22:17.800 }, 00:22:17.800 "auth": { 00:22:17.800 "state": "completed", 00:22:17.800 "digest": "sha512", 00:22:17.800 "dhgroup": "ffdhe8192" 00:22:17.800 } 00:22:17.800 } 00:22:17.800 ]' 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.800 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.061 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:22:18.061 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=: 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:19.002 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:19.263 request:
00:22:19.263 {
00:22:19.263 "name": "nvme0",
00:22:19.263 "trtype": "tcp",
00:22:19.263 "traddr": "10.0.0.2",
00:22:19.263 "adrfam": "ipv4",
00:22:19.263 "trsvcid": "4420",
00:22:19.263 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:19.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:19.263 "prchk_reftag": false,
00:22:19.263 "prchk_guard": false,
00:22:19.263 "hdgst": false,
00:22:19.263 "ddgst": false,
00:22:19.263 "dhchap_key": "key3",
00:22:19.263 "allow_unrecognized_csi": false,
00:22:19.263 "method": "bdev_nvme_attach_controller",
00:22:19.263 "req_id": 1
00:22:19.263 }
00:22:19.263 Got JSON-RPC error response
00:22:19.263 response:
00:22:19.263 {
00:22:19.263 "code": -5,
00:22:19.263 "message": "Input/output error"
00:22:19.263 }
00:22:19.263 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:22:19.263 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:19.263 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:19.263 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:19.263 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:22:19.263 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:22:19.263 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:22:19.263 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:19.524 request:
00:22:19.524 {
00:22:19.524 "name": "nvme0",
00:22:19.524 "trtype": "tcp",
00:22:19.524 "traddr": "10.0.0.2",
00:22:19.524 "adrfam": "ipv4",
00:22:19.524 "trsvcid": "4420",
00:22:19.524 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:19.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:19.524 "prchk_reftag": false,
00:22:19.524 "prchk_guard": false,
00:22:19.524 "hdgst": false,
00:22:19.524 "ddgst": false,
00:22:19.524 "dhchap_key": "key3",
00:22:19.524 "allow_unrecognized_csi": false,
00:22:19.524 "method": "bdev_nvme_attach_controller",
00:22:19.524 "req_id": 1
00:22:19.524 }
00:22:19.524 Got JSON-RPC error response
00:22:19.524 response:
00:22:19.524 {
00:22:19.524 "code": -5,
00:22:19.524 "message": "Input/output error"
00:22:19.524 }
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:19.524 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:19.786 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:20.047 request:
00:22:20.047 {
00:22:20.047 "name": "nvme0",
00:22:20.047 "trtype": "tcp",
00:22:20.047 "traddr": "10.0.0.2",
00:22:20.047 "adrfam": "ipv4",
00:22:20.047 "trsvcid": "4420",
00:22:20.047 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:20.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:20.047 "prchk_reftag": false,
00:22:20.047 "prchk_guard": false,
00:22:20.047 "hdgst": false,
00:22:20.047 "ddgst": false,
00:22:20.047 "dhchap_key": "key0",
00:22:20.047 "dhchap_ctrlr_key": "key1",
00:22:20.047 "allow_unrecognized_csi": false,
00:22:20.047 "method": "bdev_nvme_attach_controller",
00:22:20.047 "req_id": 1
00:22:20.047 }
00:22:20.047 Got JSON-RPC error response
00:22:20.047 response:
00:22:20.047 {
00:22:20.047 "code": -5,
00:22:20.047 "message": "Input/output error"
00:22:20.047 }
00:22:20.047 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:22:20.047 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:20.047 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:20.047 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:20.047 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:22:20.047 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:22:20.047 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:22:20.307 nvme0n1
00:22:20.307 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:22:20.307 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:22:20.307 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:20.568 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:20.568 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:20.568 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:20.829 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1
00:22:20.829 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:20.829 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:20.829 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:20.829 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:22:20.829 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:20.829 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:21.770 nvme0n1
00:22:21.770 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:22:21.770 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:22:21.770 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:21.770 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:21.770 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:21.770 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:21.770 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:21.770 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:21.770 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:22:21.770 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:21.770 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:22:22.031 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:22.031 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=:
00:22:22.031 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: --dhchap-ctrl-secret DHHC-1:03:MjQ3MzA4ZDM3MDExNTZhMTkzMTY0MGQ5NDJhNDYzMzQ1MzlhY2VmMDYxM2IzNmI3NTVkMGM5NzMxYWY2YjBlOM8LAXc=:
00:22:22.601 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:22:22.601 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:22:22.601 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:22:22.601 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:22:22.601 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:22:22.601 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:22:22.601 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:22:22.601 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:22.601 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:22.866 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:22:22.866 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:22:22.866 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:22:22.866 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:22:22.866 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:22.866 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:22:22.866 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:22.866 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:22:22.866 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:22.866 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:23.437 request:
00:22:23.437 {
00:22:23.437 "name": "nvme0",
00:22:23.437 "trtype": "tcp",
00:22:23.437 "traddr": "10.0.0.2",
00:22:23.437 "adrfam": "ipv4",
00:22:23.437 "trsvcid": "4420",
00:22:23.437 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:23.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:22:23.437 "prchk_reftag": false,
00:22:23.437 "prchk_guard": false,
00:22:23.437 "hdgst": false,
00:22:23.437 "ddgst": false,
00:22:23.437 "dhchap_key": "key1",
00:22:23.437 "allow_unrecognized_csi": false,
00:22:23.437 "method": "bdev_nvme_attach_controller",
00:22:23.437 "req_id": 1
00:22:23.437 }
00:22:23.438 Got JSON-RPC error response
00:22:23.438 response:
00:22:23.438 {
00:22:23.438 "code": -5,
00:22:23.438 "message": "Input/output error"
00:22:23.438 }
00:22:23.438 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:22:23.438 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:23.438 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:23.438 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:23.438 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:23.438 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:23.438 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:24.377 nvme0n1
00:22:24.377 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:22:24.377 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:22:24.377 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:24.377 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:24.377 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:24.377 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:24.638 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:24.638 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:24.638 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:24.638 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:24.638 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:22:24.638 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:22:24.638 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:22:24.898 nvme0n1
00:22:24.898 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:22:24.898 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:22:24.898 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:25.158 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:25.158 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:25.158 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: '' 2s
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg:
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg: ]]
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjRjNzkzYmEwMjVlY2FiZGE5ZTRhZjQwNWZkMmZkNmOMnVeg:
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:22:25.158 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: 2s
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==:
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==: ]]
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==:
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:22:27.698 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:29.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:29.610 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:30.179 nvme0n1
00:22:30.179 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:30.179 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:30.179 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:30.179 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:30.179 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:30.179 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:30.748 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:22:30.748 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:22:30.748 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:31.008 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:31.009 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:22:31.009 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.009 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:31.009 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.009 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:22:31.009 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:22:31.009 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:22:31.009 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:22:31.009 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:31.269 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:31.269 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:31.269 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:31.269 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:31.269 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:31.269 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:31.269 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:22:31.269 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:31.270 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:22:31.270 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:31.270 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:22:31.270 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:31.270 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:31.270 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:31.839 request:
00:22:31.839 {
00:22:31.839 "name": "nvme0",
00:22:31.839 "dhchap_key": "key1",
00:22:31.839 "dhchap_ctrlr_key": "key3",
00:22:31.839 "method": "bdev_nvme_set_keys",
00:22:31.839 "req_id": 1
00:22:31.839 }
00:22:31.839 Got JSON-RPC error response
00:22:31.839 response:
00:22:31.839 {
00:22:31.839 "code": -13,
00:22:31.839 "message": "Permission denied"
00:22:31.839 }
00:22:31.839 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:22:31.839 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:31.839 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:31.839 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:31.839 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:22:31.839 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:22:31.839 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:32.098 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:22:32.098 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:22:33.037 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:22:33.037 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:22:33.037 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:33.037 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:22:33.037 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:33.037 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:33.037 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:33.297 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:33.297 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:33.297 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:33.297 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:33.865 nvme0n1
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
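A note on what these exchanges actually are: scripts/rpc.py is a thin JSON-RPC 2.0 client that talks to the SPDK application over the Unix-domain socket passed with -s (here /var/tmp/host.sock); the request/response dumps interleaved in the log are its echo of that traffic. The following is a minimal illustrative sketch of the transport, assuming only the Python standard library; the helper name spdk_rpc and the commented usage line are hypothetical, not part of the test or of SPDK itself:

import json, socket

def spdk_rpc(sock_path, method, params):
    # One JSON-RPC 2.0 request per call; the server replies with a single
    # JSON object carrying either "result" or "error".
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full reply")
            buf += chunk
            try:
                return json.loads(buf)  # succeeds once the reply is complete
            except ValueError:
                continue  # partial JSON so far, keep reading

# The rejected re-key attempt traced below would surface here as a response
# whose "error" member is {"code": -13, "message": "Permission denied"}:
# spdk_rpc("/var/tmp/host.sock", "bdev_nvme_set_keys",
#          {"name": "nvme0", "dhchap_key": "key2", "dhchap_ctrlr_key": "key0"})

The test's NOT wrapper treats that error as the expected outcome, which is why the trace immediately checks es=1 instead of failing the run.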
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:34.126 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:34.696 request:
00:22:34.696 {
00:22:34.696 "name": "nvme0",
00:22:34.696 "dhchap_key": "key2",
00:22:34.696 "dhchap_ctrlr_key": "key0",
00:22:34.696 "method": "bdev_nvme_set_keys",
00:22:34.696 "req_id": 1
00:22:34.696 }
00:22:34.696 Got JSON-RPC error response
00:22:34.696 response:
00:22:34.696 {
00:22:34.696 "code": -13,
00:22:34.696 "message": "Permission denied"
00:22:34.696 }
00:22:34.696 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:22:34.696 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:34.696 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:34.696 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:34.696 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:22:34.696 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:22:34.696 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:34.696 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:22:34.696 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:22:35.635 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:22:35.635 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:22:35.635 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3845730
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3845730 ']'
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3845730
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3845730
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3845730'
00:22:35.894 killing process with pid 3845730
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3845730
00:22:35.894 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3845730
00:22:37.275 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:22:37.275 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:37.275 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync
00:22:37.275 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:37.275 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e
00:22:37.275 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:37.275 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:37.275 rmmod nvme_tcp
00:22:37.275 rmmod nvme_fabrics
00:22:37.275 rmmod nvme_keyring
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3873092 ']'
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3873092
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3873092 ']'
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3873092
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3873092
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3873092'
00:22:37.275 killing process with pid 3873092
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3873092
00:22:37.275 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3873092
00:22:38.215 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:38.215 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:38.215 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:38.215 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr
00:22:38.215 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save
00:22:38.215 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:38.215 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore
00:22:38.215 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:38.215 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:38.215 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:38.215 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:38.215 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:40.125 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:40.125 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.sM1 /tmp/spdk.key-sha256.H7o /tmp/spdk.key-sha384.i46 /tmp/spdk.key-sha512.tvF /tmp/spdk.key-sha512.kV7 /tmp/spdk.key-sha384.FQM /tmp/spdk.key-sha256.G3b '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log
00:22:40.125
00:22:40.125 real 2m49.623s
00:22:40.125 user 6m15.071s
00:22:40.125 sys 0m25.210s
00:22:40.125 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable
00:22:40.125 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:40.125 ************************************
00:22:40.125 END TEST nvmf_auth_target
00:22:40.125 ************************************
00:22:40.125 13:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']'
00:22:40.125 13:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:22:40.125 13:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:22:40.125 13:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:22:40.125 13:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:40.126 ************************************
00:22:40.126 START TEST nvmf_bdevio_no_huge
00:22:40.126 ************************************
00:22:40.126 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:22:40.126 * Looking for test storage...
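Before the next test's preamble, it is worth decoding the shape of the secrets the auth test was exercising (the /tmp/spdk.key-* files just removed): each DHHC-1 string above follows the NVMe-oF DH-HMAC-CHAP secret representation "DHHC-1:<id>:<base64>:", where <id> selects the HMAC (01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is understood to be the key bytes followed by a 4-byte little-endian CRC-32 of those bytes. The parser below is a hedged illustrative sketch under that assumption (parse_dhchap_secret is not SPDK or libnvme code; verify the CRC layout against the spec before relying on it):

import base64, binascii

def parse_dhchap_secret(secret):
    # "DHHC-1:02:<base64>:" -> (hmac id, raw key bytes)
    prefix, hmac_id, b64, _ = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DH-HMAC-CHAP secret")
    raw = base64.b64decode(b64)
    key, crc = raw[:-4], int.from_bytes(raw[-4:], "little")
    if binascii.crc32(key) != crc:  # assumption: trailing little-endian CRC-32
        raise ValueError("CRC mismatch - secret corrupted")
    return hmac_id, key

# e.g. the key2 secret from the log decodes to a 48-byte key, consistent
# with its SHA-384 id of 02:
# parse_dhchap_secret("DHHC-1:02:MDg3OTZiYWQwMGJjMjBiZjBlMWU2ZWI1MTIwYTc1NzZkNDM1NzViOTA4Yjg2NzU5/z5iSQ==:")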
00:22:40.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:40.126 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:40.126 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:22:40.126 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:40.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.387 --rc genhtml_branch_coverage=1 00:22:40.387 --rc genhtml_function_coverage=1 00:22:40.387 --rc genhtml_legend=1 00:22:40.387 --rc geninfo_all_blocks=1 00:22:40.387 --rc geninfo_unexecuted_blocks=1 00:22:40.387 00:22:40.387 ' 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:40.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.387 --rc genhtml_branch_coverage=1 00:22:40.387 --rc genhtml_function_coverage=1 00:22:40.387 --rc genhtml_legend=1 00:22:40.387 --rc geninfo_all_blocks=1 00:22:40.387 --rc geninfo_unexecuted_blocks=1 00:22:40.387 00:22:40.387 ' 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:40.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.387 --rc genhtml_branch_coverage=1 00:22:40.387 --rc genhtml_function_coverage=1 00:22:40.387 --rc genhtml_legend=1 00:22:40.387 --rc geninfo_all_blocks=1 00:22:40.387 --rc geninfo_unexecuted_blocks=1 00:22:40.387 00:22:40.387 ' 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:40.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.387 --rc genhtml_branch_coverage=1 00:22:40.387 --rc genhtml_function_coverage=1 00:22:40.387 --rc genhtml_legend=1 00:22:40.387 --rc geninfo_all_blocks=1 00:22:40.387 --rc geninfo_unexecuted_blocks=1 00:22:40.387 00:22:40.387 ' 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.387 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:40.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.388 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.528 
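The "[: : integer expression expected" complaint above is a real scripting slip in test/nvmf/common.sh line 33: an empty expansion lands inside an integer test, so bash ends up evaluating [ "" -eq 1 ]. A minimal sketch of the failure and the usual defensive rewrite, with a hypothetical SPDK_TEST_FOO standing in for whatever variable is empty at that point:

  SPDK_TEST_FOO=""
  # [ "$SPDK_TEST_FOO" -eq 1 ]               # reproduces: integer expression expected
  if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then   # default-to-0 keeps the test well-formed
      echo "feature enabled"
  fi

In this run the malformed test simply evaluates false and execution continues, which is why the message is easy to overlook in the noise.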
13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.528 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:48.529 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:48.529 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:48.529 Found net devices under 0000:31:00.0: cvl_0_0 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
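Worth noting how the device discovery above works: nvmf/common.sh never parses lspci output, it globs the net/ subdirectory the kernel exposes under each PCI function in sysfs. A condensed sketch of that step, using the PCI address seen in this run:

  pci=0000:31:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # absolute sysfs paths
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"

With nullglob unset, a function that has no bound network driver leaves the literal glob pattern in the array, so anything beyond a sketch needs an existence check on top of this.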
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:48.529 Found net devices under 0000:31:00.1: cvl_0_1 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.529 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:22:48.790 00:22:48.790 --- 10.0.0.2 ping statistics --- 00:22:48.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.790 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:48.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:22:48.790 00:22:48.790 --- 10.0.0.1 ping statistics --- 00:22:48.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.790 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3882196 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3882196 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 3882196 ']' 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
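The sequence just traced is the core of the phy/netns topology: one physical port (cvl_0_0) moves into a private namespace to act as the target at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove the path in both directions. Collected into one runnable block (root required; names and addresses exactly as in the log):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns

Prepending "ip netns exec $NVMF_TARGET_NAMESPACE" to NVMF_APP, as done right after the pings, is what later runs the whole nvmf_tgt inside that namespace.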
-- common/autotest_common.sh@838 -- # local max_retries=100 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:48.790 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.051 [2024-11-07 13:27:56.804648] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:22:49.051 [2024-11-07 13:27:56.804786] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:49.051 [2024-11-07 13:27:57.004408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.324 [2024-11-07 13:27:57.124674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.324 [2024-11-07 13:27:57.124733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.324 [2024-11-07 13:27:57.124747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.324 [2024-11-07 13:27:57.124760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.324 [2024-11-07 13:27:57.124770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
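nvmf_tgt is launched here with -m 0x78 and capped at 1024 MiB of non-hugepage memory (--no-huge -s 1024). The mask 0x78 is binary 0111 1000, i.e. cores 3 through 6, which matches the four "Reactor started on core N" notices that follow. A quick sketch for decoding any such core mask:

  mask=0x78
  for core in $(seq 0 63); do
      (( (mask >> core) & 1 )) && printf 'core %d\n' "$core"   # prints 3, 4, 5, 6
  done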
00:22:49.324 [2024-11-07 13:27:57.127280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:49.324 [2024-11-07 13:27:57.127434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:49.324 [2024-11-07 13:27:57.127576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.324 [2024-11-07 13:27:57.127603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:49.624 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:49.624 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:22:49.624 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.624 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:49.624 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.917 [2024-11-07 13:27:57.650748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.917 Malloc0 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.917 [2024-11-07 13:27:57.744970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:49.917 { 00:22:49.917 "params": { 00:22:49.917 "name": "Nvme$subsystem", 00:22:49.917 "trtype": "$TEST_TRANSPORT", 00:22:49.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:49.917 "adrfam": "ipv4", 00:22:49.917 "trsvcid": "$NVMF_PORT", 00:22:49.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:49.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:49.917 "hdgst": ${hdgst:-false}, 00:22:49.917 "ddgst": ${ddgst:-false} 00:22:49.917 }, 00:22:49.917 "method": "bdev_nvme_attach_controller" 00:22:49.917 } 00:22:49.917 EOF 00:22:49.917 )") 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:49.917 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:49.917 "params": { 00:22:49.917 "name": "Nvme1", 00:22:49.917 "trtype": "tcp", 00:22:49.917 "traddr": "10.0.0.2", 00:22:49.917 "adrfam": "ipv4", 00:22:49.917 "trsvcid": "4420", 00:22:49.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.917 "hdgst": false, 00:22:49.917 "ddgst": false 00:22:49.917 }, 00:22:49.917 "method": "bdev_nvme_attach_controller" 00:22:49.917 }' 00:22:49.917 [2024-11-07 13:27:57.840799] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
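Stripped of the rpc_cmd plumbing, the bdevio target setup traced above reduces to five RPCs against the default /var/tmp/spdk.sock: create the TCP transport, back it with a 64 MiB / 512 B malloc bdev, and expose that bdev as a namespace of cnode1 listening on 10.0.0.2:4420. As a plain scripts/rpc.py sequence (all values taken from this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The /dev/fd/62 handed to bdevio is bash process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller JSON shown above, and bdevio reads it as if it were an ordinary config file.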
00:22:49.917 [2024-11-07 13:27:57.840940] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3882389 ] 00:22:50.177 [2024-11-07 13:27:58.017929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:50.177 [2024-11-07 13:27:58.128772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.177 [2024-11-07 13:27:58.128856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.177 [2024-11-07 13:27:58.128861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.747 I/O targets: 00:22:50.747 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:50.747 00:22:50.747 00:22:50.747 CUnit - A unit testing framework for C - Version 2.1-3 00:22:50.747 http://cunit.sourceforge.net/ 00:22:50.747 00:22:50.747 00:22:50.747 Suite: bdevio tests on: Nvme1n1 00:22:50.747 Test: blockdev write read block ...passed 00:22:50.747 Test: blockdev write zeroes read block ...passed 00:22:50.747 Test: blockdev write zeroes read no split ...passed 00:22:50.747 Test: blockdev write zeroes read split ...passed 00:22:50.747 Test: blockdev write zeroes read split partial ...passed 00:22:50.747 Test: blockdev reset ...[2024-11-07 13:27:58.720786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:50.747 [2024-11-07 13:27:58.720904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000414900 (9): Bad file descriptor 00:22:51.008 [2024-11-07 13:27:58.826913] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:51.008 passed 00:22:51.008 Test: blockdev write read 8 blocks ...passed 00:22:51.008 Test: blockdev write read size > 128k ...passed 00:22:51.008 Test: blockdev write read invalid size ...passed 00:22:51.008 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:51.008 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:51.008 Test: blockdev write read max offset ...passed 00:22:51.268 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:51.268 Test: blockdev writev readv 8 blocks ...passed 00:22:51.268 Test: blockdev writev readv 30 x 1block ...passed 00:22:51.268 Test: blockdev writev readv block ...passed 00:22:51.268 Test: blockdev writev readv size > 128k ...passed 00:22:51.268 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:51.268 Test: blockdev comparev and writev ...[2024-11-07 13:27:59.138316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:51.268 [2024-11-07 13:27:59.138353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.268 [2024-11-07 13:27:59.138371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:51.268 [2024-11-07 13:27:59.138380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.268 [2024-11-07 13:27:59.138929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:51.268 [2024-11-07 13:27:59.138944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:51.268 [2024-11-07 13:27:59.138960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:51.268 [2024-11-07 13:27:59.138968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:51.268 [2024-11-07 13:27:59.139504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:51.268 [2024-11-07 13:27:59.139517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:51.268 [2024-11-07 13:27:59.139530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:51.268 [2024-11-07 13:27:59.139541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:51.268 [2024-11-07 13:27:59.140033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:51.268 [2024-11-07 13:27:59.140046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:51.268 [2024-11-07 13:27:59.140058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:51.268 [2024-11-07 13:27:59.140066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:51.268 passed 00:22:51.268 Test: blockdev nvme passthru rw ...passed 00:22:51.268 Test: blockdev nvme passthru vendor specific ...[2024-11-07 13:27:59.223712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:51.268 [2024-11-07 13:27:59.223732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:51.268 [2024-11-07 13:27:59.224126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:51.268 [2024-11-07 13:27:59.224137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:51.268 [2024-11-07 13:27:59.224516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:51.268 [2024-11-07 13:27:59.224526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:51.268 [2024-11-07 13:27:59.224899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:51.268 [2024-11-07 13:27:59.224911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:51.268 passed 00:22:51.268 Test: blockdev nvme admin passthru ...passed 00:22:51.528 Test: blockdev copy ...passed 00:22:51.528 00:22:51.528 Run Summary: Type Total Ran Passed Failed Inactive 00:22:51.528 suites 1 1 n/a 0 0 00:22:51.528 tests 23 23 23 0 0 00:22:51.528 asserts 152 152 152 0 n/a 00:22:51.528 00:22:51.528 Elapsed time = 1.620 seconds 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:52.098 rmmod nvme_tcp 00:22:52.098 rmmod nvme_fabrics 00:22:52.098 rmmod nvme_keyring 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
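The completion spam in the comparev/writev test above is the point of the test, not a failure: the (SCT/SC) pairs decode to Compare Failure under Media and Data Integrity Errors (02/85) and Command Aborted due to Failed Fused Command (00/09), which is exactly how a fused COMPARE+WRITE pair must complete when the compare half mismatches, since the write half is then aborted instead of applied. A toy decoder for the two codes seen in this run (table entries from the NVMe base specification):

  decode_status() {   # args: SCT SC, hex digits as printed in the log
      case "$1/$2" in
          02/85) echo "Media and Data Integrity Errors / Compare Failure" ;;
          00/09) echo "Generic Command Status / Command Aborted due to Failed Fused Command" ;;
          *)     echo "not decoded here: see the NVMe base spec status code tables" ;;
      esac
  }
  decode_status 02 85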
nvmf/common.sh@128 -- # set -e 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3882196 ']' 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3882196 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 3882196 ']' 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 3882196 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3882196 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3882196' 00:22:52.098 killing process with pid 3882196 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 3882196 00:22:52.098 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 3882196 00:22:52.669 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:52.669 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:52.669 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:52.669 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:52.669 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:52.669 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:52.669 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:52.669 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:52.669 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:52.669 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.669 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.669 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.580 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:54.580 00:22:54.580 real 0m14.506s 00:22:54.580 user 0m19.054s 00:22:54.580 sys 0m7.632s 00:22:54.580 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:54.580 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
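The teardown above relies on a tagging trick instead of rule bookkeeping: every rule the ipts helper installs carries an SPDK_NVMF comment, so nvmftestfini can drop all of them in a single filter pass over the saved ruleset. The two halves of that pattern, as traced:

  # install (ipts): the rule carries its own spec as a comment
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # teardown (iptr): reload the ruleset minus anything tagged SPDK_NVMF
  iptables-save | grep -v SPDK_NVMF | iptables-restore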
common/autotest_common.sh@10 -- # set +x 00:22:54.580 ************************************ 00:22:54.580 END TEST nvmf_bdevio_no_huge 00:22:54.580 ************************************ 00:22:54.580 13:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:54.580 13:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:54.580 13:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:54.580 13:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:54.580 ************************************ 00:22:54.580 START TEST nvmf_tls 00:22:54.580 ************************************ 00:22:54.580 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:54.841 * Looking for test storage... 00:22:54.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:54.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.841 --rc genhtml_branch_coverage=1 00:22:54.841 --rc genhtml_function_coverage=1 00:22:54.841 --rc genhtml_legend=1 00:22:54.841 --rc geninfo_all_blocks=1 00:22:54.841 --rc geninfo_unexecuted_blocks=1 00:22:54.841 00:22:54.841 ' 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:54.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.841 --rc genhtml_branch_coverage=1 00:22:54.841 --rc genhtml_function_coverage=1 00:22:54.841 --rc genhtml_legend=1 00:22:54.841 --rc geninfo_all_blocks=1 00:22:54.841 --rc geninfo_unexecuted_blocks=1 00:22:54.841 00:22:54.841 ' 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:54.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.841 --rc genhtml_branch_coverage=1 00:22:54.841 --rc genhtml_function_coverage=1 00:22:54.841 --rc genhtml_legend=1 00:22:54.841 --rc geninfo_all_blocks=1 00:22:54.841 --rc geninfo_unexecuted_blocks=1 00:22:54.841 00:22:54.841 ' 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:54.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.841 --rc genhtml_branch_coverage=1 00:22:54.841 --rc genhtml_function_coverage=1 00:22:54.841 --rc genhtml_legend=1 00:22:54.841 --rc geninfo_all_blocks=1 00:22:54.841 --rc geninfo_unexecuted_blocks=1 00:22:54.841 00:22:54.841 ' 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
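The lt/cmp_versions trace above is scripts/common.sh checking the installed lcov against version 2, one dotted component at a time, padding the shorter version with zeros. A compact sketch of the same logic, assuming purely numeric components:

  version_lt() {   # usage: version_lt A B -> success iff A < B
      local -a a b
      local i n
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo "1.15 < 2"

Here 1.15 < 2 holds, so the pre-2.0 spelling of the coverage options (lcov_branch_coverage, lcov_function_coverage) is selected, matching the LCOV_OPTS export above.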
00:22:54.841 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:54.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
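One detail from the common.sh re-source above: the host identity is never hand-written. nvme-cli's gen-hostnqn mints a UUID-based NQN, and because the UUID is the final colon-separated field, the matching host ID falls out of a parameter expansion. A sketch of that derivation (the exact extraction inside common.sh may differ, and the UUID changes per invocation):

  hostnqn=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
  hostid=${hostnqn##*:}         # UUIDs contain no colons, so this keeps just the UUID
  echo "--hostnqn=$hostnqn --hostid=$hostid"

Both values then ride along on every nvme connect via the NVME_HOST array.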
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:54.842 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:02.981 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:02.981 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
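The e810/x722/mlx arrays being rebuilt here for the tls suite pair a vendor ID with known device IDs, and the pci_bus_cache lookup maps each pair to bus addresses. The same classification can be sketched straight from sysfs, shown for the two E810 device IDs this run matches on (a stand-in for the script's cached lookup, not its actual mechanism):

  intel=0x8086
  declare -a e810=()
  for dev in /sys/bus/pci/devices/*; do
      [[ $(<"$dev/vendor") == "$intel" ]] || continue
      case $(<"$dev/device") in
          0x1592|0x159b) e810+=("${dev##*/}") ;;   # the two E810 IDs from common.sh
      esac
  done
  echo "E810 functions: ${e810[*]}"

On this machine that yields 0000:31:00.0 and 0000:31:00.1, the same two ports found above.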
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:02.981 Found net devices under 0000:31:00.0: cvl_0_0 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.981 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:02.982 Found net devices under 0000:31:00.1: cvl_0_1 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.982 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:03.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:23:03.243 00:23:03.243 --- 10.0.0.2 ping statistics --- 00:23:03.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.243 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:03.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:23:03.243 00:23:03.243 --- 10.0.0.1 ping statistics --- 00:23:03.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.243 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3887702 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3887702 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3887702 ']' 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:03.243 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.504 [2024-11-07 13:28:11.322759] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
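nvmf_tcp_init, traced just above, gives the target its own network stack: port cvl_0_0 of the E810 pair moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), its sibling cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits TCP/4420, and a ping in each direction proves the path before nvmf_tgt is launched inside the namespace. Condensed into a sketch (same commands and names as in this run, address flushes omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                   # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns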
00:23:03.504 [2024-11-07 13:28:11.322868] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.504 [2024-11-07 13:28:11.488843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.765 [2024-11-07 13:28:11.604846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.765 [2024-11-07 13:28:11.604922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.765 [2024-11-07 13:28:11.604935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.765 [2024-11-07 13:28:11.604949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.765 [2024-11-07 13:28:11.604963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.765 [2024-11-07 13:28:11.606473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.337 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:04.337 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:04.337 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:04.337 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:04.337 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.337 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.337 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:04.337 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:04.337 true 00:23:04.337 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:04.337 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:04.597 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:04.597 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:04.597 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:04.857 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:04.857 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:05.118 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:05.118 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:05.118 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:05.118 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:05.118 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:05.378 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:05.378 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:05.378 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:05.378 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:05.639 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:05.639 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:05.639 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:05.639 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:05.639 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:05.899 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:05.899 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:05.899 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:06.161 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:06.161 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:06.421 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:06.421 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:06.421 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.4PwVgsX2nf 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.uWpQUGgLgJ 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.4PwVgsX2nf 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.uWpQUGgLgJ 00:23:06.422 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:06.682 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:06.942 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.4PwVgsX2nf 00:23:06.942 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4PwVgsX2nf 00:23:06.942 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:07.202 [2024-11-07 13:28:14.970816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.202 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:07.202 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:07.462 [2024-11-07 13:28:15.287587] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:07.462 [2024-11-07 13:28:15.287843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.462 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:07.721 malloc0 00:23:07.721 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:07.722 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4PwVgsX2nf 00:23:07.982 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:07.982 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.4PwVgsX2nf 00:23:20.209 Initializing NVMe Controllers 00:23:20.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:20.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:20.209 Initialization complete. Launching workers. 00:23:20.209 ======================================================== 00:23:20.209 Latency(us) 00:23:20.209 Device Information : IOPS MiB/s Average min max 00:23:20.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15372.17 60.05 4163.49 1585.24 5097.99 00:23:20.209 ======================================================== 00:23:20.209 Total : 15372.17 60.05 4163.49 1585.24 5097.99 00:23:20.209 00:23:20.209 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4PwVgsX2nf 00:23:20.209 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.209 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.209 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.209 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4PwVgsX2nf 00:23:20.209 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.209 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3890407 00:23:20.209 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.209 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3890407 /var/tmp/bdevperf.sock 00:23:20.209 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.209 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3890407 ']' 00:23:20.210 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.210 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:20.210 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
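Two setup details from the trace above are worth pinning down. First, the script round-trips every socket option it sets (sock_impl_set_options -i ssl --tls-version 13, read back through sock_impl_get_options piped to jq -r .tls_version, and the same dance for --enable-ktls/--disable-ktls) to prove the setters stick. Second, the NVMeTLSkey-1 strings come from format_interchange_psk, which wraps a raw key in the TP 8011 interchange form NVMeTLSkey-1:<hash>:<base64 payload>:, where the payload appears to be the key bytes with their little-endian CRC32 appended (the inline python at nvmf/common.sh@733 does the work). A self-contained sketch of that reading, under an assumed helper name; if the reading is right, it reproduces the key0 value above:

# format_key_sketch is a stand-in name, not the harness function.
format_key_sketch() {
  local prefix=$1 key=$2 digest=$3
  python3 - <<EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, "little")           # CRC32 of the key, little-endian
print("$prefix:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()))
EOF
}
format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
# expected, matching key0 above: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: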
00:23:20.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.210 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:20.210 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.210 [2024-11-07 13:28:26.269230] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:23:20.210 [2024-11-07 13:28:26.269346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890407 ] 00:23:20.210 [2024-11-07 13:28:26.386329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.210 [2024-11-07 13:28:26.460436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.210 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:20.210 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:20.210 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4PwVgsX2nf 00:23:20.210 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.210 [2024-11-07 13:28:27.329826] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.210 TLSTESTn1 00:23:20.210 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:20.210 Running I/O for 10 seconds... 
00:23:21.722 5174.00 IOPS, 20.21 MiB/s [2024-11-07T12:28:30.669Z] 5121.00 IOPS, 20.00 MiB/s [2024-11-07T12:28:31.610Z] 4816.67 IOPS, 18.82 MiB/s [2024-11-07T12:28:32.549Z] 4634.25 IOPS, 18.10 MiB/s [2024-11-07T12:28:33.929Z] 4636.40 IOPS, 18.11 MiB/s [2024-11-07T12:28:34.869Z] 4631.17 IOPS, 18.09 MiB/s [2024-11-07T12:28:35.809Z] 4734.71 IOPS, 18.49 MiB/s [2024-11-07T12:28:36.748Z] 4757.38 IOPS, 18.58 MiB/s [2024-11-07T12:28:37.687Z] 4750.78 IOPS, 18.56 MiB/s [2024-11-07T12:28:37.687Z] 4766.10 IOPS, 18.62 MiB/s 00:23:29.680 Latency(us) 00:23:29.680 [2024-11-07T12:28:37.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.680 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:29.680 Verification LBA range: start 0x0 length 0x2000 00:23:29.680 TLSTESTn1 : 10.07 4746.75 18.54 0.00 0.00 26878.79 7154.35 66846.72 00:23:29.680 [2024-11-07T12:28:37.687Z] =================================================================================================================== 00:23:29.680 [2024-11-07T12:28:37.687Z] Total : 4746.75 18.54 0.00 0.00 26878.79 7154.35 66846.72 00:23:29.680 { 00:23:29.680 "results": [ 00:23:29.680 { 00:23:29.680 "job": "TLSTESTn1", 00:23:29.680 "core_mask": "0x4", 00:23:29.680 "workload": "verify", 00:23:29.680 "status": "finished", 00:23:29.680 "verify_range": { 00:23:29.680 "start": 0, 00:23:29.680 "length": 8192 00:23:29.680 }, 00:23:29.680 "queue_depth": 128, 00:23:29.680 "io_size": 4096, 00:23:29.680 "runtime": 10.067315, 00:23:29.680 "iops": 4746.747270746967, 00:23:29.680 "mibps": 18.54198152635534, 00:23:29.680 "io_failed": 0, 00:23:29.680 "io_timeout": 0, 00:23:29.680 "avg_latency_us": 26878.79022272445, 00:23:29.680 "min_latency_us": 7154.346666666666, 00:23:29.680 "max_latency_us": 66846.72 00:23:29.680 } 00:23:29.680 ], 00:23:29.680 "core_count": 1 00:23:29.680 } 00:23:29.680 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.680 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3890407 00:23:29.680 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3890407 ']' 00:23:29.680 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3890407 00:23:29.680 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:29.680 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:29.680 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3890407 00:23:29.940 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:29.940 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:29.940 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3890407' 00:23:29.940 killing process with pid 3890407 00:23:29.940 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3890407 00:23:29.940 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.940 00:23:29.940 Latency(us) 00:23:29.940 [2024-11-07T12:28:37.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.940 [2024-11-07T12:28:37.947Z] 
=================================================================================================================== 00:23:29.940 [2024-11-07T12:28:37.947Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:29.940 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3890407 00:23:30.199 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uWpQUGgLgJ 00:23:30.199 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:30.199 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uWpQUGgLgJ 00:23:30.199 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uWpQUGgLgJ 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uWpQUGgLgJ 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3892715 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3892715 /var/tmp/bdevperf.sock 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3892715 ']' 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:30.200 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.459 [2024-11-07 13:28:38.235684] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:23:30.459 [2024-11-07 13:28:38.235799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892715 ] 00:23:30.459 [2024-11-07 13:28:38.351834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.459 [2024-11-07 13:28:38.425941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.028 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:31.028 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:31.028 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uWpQUGgLgJ 00:23:31.288 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:31.548 [2024-11-07 13:28:39.315706] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.548 [2024-11-07 13:28:39.327653] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:31.548 [2024-11-07 13:28:39.327936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (107): Transport endpoint is not connected 00:23:31.548 [2024-11-07 13:28:39.328924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:23:31.548 [2024-11-07 13:28:39.329919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:31.548 [2024-11-07 13:28:39.329940] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:31.548 [2024-11-07 13:28:39.329950] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:31.548 [2024-11-07 13:28:39.329967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:31.548 request: 00:23:31.548 { 00:23:31.548 "name": "TLSTEST", 00:23:31.548 "trtype": "tcp", 00:23:31.548 "traddr": "10.0.0.2", 00:23:31.548 "adrfam": "ipv4", 00:23:31.548 "trsvcid": "4420", 00:23:31.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.548 "prchk_reftag": false, 00:23:31.548 "prchk_guard": false, 00:23:31.548 "hdgst": false, 00:23:31.548 "ddgst": false, 00:23:31.548 "psk": "key0", 00:23:31.548 "allow_unrecognized_csi": false, 00:23:31.548 "method": "bdev_nvme_attach_controller", 00:23:31.548 "req_id": 1 00:23:31.548 } 00:23:31.548 Got JSON-RPC error response 00:23:31.548 response: 00:23:31.548 { 00:23:31.548 "code": -5, 00:23:31.548 "message": "Input/output error" 00:23:31.548 } 00:23:31.548 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3892715 00:23:31.548 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3892715 ']' 00:23:31.548 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3892715 00:23:31.548 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:31.548 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:31.548 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3892715 00:23:31.548 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:31.548 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:31.548 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3892715' 00:23:31.548 killing process with pid 3892715 00:23:31.548 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3892715 00:23:31.548 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.548 00:23:31.548 Latency(us) 00:23:31.548 [2024-11-07T12:28:39.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.548 [2024-11-07T12:28:39.555Z] =================================================================================================================== 00:23:31.548 [2024-11-07T12:28:39.555Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.548 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3892715 00:23:32.117 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:32.117 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:32.117 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:32.117 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.4PwVgsX2nf 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.4PwVgsX2nf 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.4PwVgsX2nf 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4PwVgsX2nf 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3893049 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3893049 /var/tmp/bdevperf.sock 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3893049 ']' 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:32.118 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.118 [2024-11-07 13:28:39.936807] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
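The wrong-key case above, like the identity and empty-key cases still to come, leans on the NOT wrapper from common/autotest_common.sh: run_bdevperf hits return 1 once bdev_nvme_attach_controller errors out, and NOT inverts that status so the negative test passes exactly when the attach is rejected. A reduced sketch of the wrapper, leaving out the per-signal whitelisting the real one layers onto the es > 128 branch:

NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && return "$es"   # died on a signal: propagate, do not invert
  (( es != 0 ))                    # succeed only if the wrapped command failed
}
# e.g. NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uWpQUGgLgJ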
00:23:32.118 [2024-11-07 13:28:39.936933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893049 ] 00:23:32.118 [2024-11-07 13:28:40.056810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.378 [2024-11-07 13:28:40.137885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.950 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:32.950 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:32.950 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4PwVgsX2nf 00:23:32.950 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:33.211 [2024-11-07 13:28:41.051639] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.211 [2024-11-07 13:28:41.061646] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:33.211 [2024-11-07 13:28:41.061673] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:33.211 [2024-11-07 13:28:41.061709] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:33.211 [2024-11-07 13:28:41.061912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (107): Transport endpoint is not connected 00:23:33.211 [2024-11-07 13:28:41.062899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:23:33.211 [2024-11-07 13:28:41.063896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:33.211 [2024-11-07 13:28:41.063914] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:33.211 [2024-11-07 13:28:41.063924] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:33.211 [2024-11-07 13:28:41.063942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:33.211 request: 00:23:33.211 { 00:23:33.211 "name": "TLSTEST", 00:23:33.211 "trtype": "tcp", 00:23:33.211 "traddr": "10.0.0.2", 00:23:33.211 "adrfam": "ipv4", 00:23:33.211 "trsvcid": "4420", 00:23:33.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.211 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:33.211 "prchk_reftag": false, 00:23:33.211 "prchk_guard": false, 00:23:33.211 "hdgst": false, 00:23:33.211 "ddgst": false, 00:23:33.211 "psk": "key0", 00:23:33.211 "allow_unrecognized_csi": false, 00:23:33.211 "method": "bdev_nvme_attach_controller", 00:23:33.211 "req_id": 1 00:23:33.211 } 00:23:33.211 Got JSON-RPC error response 00:23:33.211 response: 00:23:33.211 { 00:23:33.211 "code": -5, 00:23:33.211 "message": "Input/output error" 00:23:33.211 } 00:23:33.211 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3893049 00:23:33.211 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3893049 ']' 00:23:33.211 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3893049 00:23:33.211 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:33.211 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:33.211 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3893049 00:23:33.211 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:33.211 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:33.211 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3893049' 00:23:33.211 killing process with pid 3893049 00:23:33.211 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3893049 00:23:33.211 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.211 00:23:33.211 Latency(us) 00:23:33.211 [2024-11-07T12:28:41.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.211 [2024-11-07T12:28:41.218Z] =================================================================================================================== 00:23:33.211 [2024-11-07T12:28:41.218Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:33.211 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3893049 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.4PwVgsX2nf 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.4PwVgsX2nf 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.4PwVgsX2nf 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4PwVgsX2nf 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3893395 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3893395 /var/tmp/bdevperf.sock 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3893395 ']' 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:33.782 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.782 [2024-11-07 13:28:41.673289] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
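This case (host2 against cnode1, above) and the next (host1 against cnode2, below) fail identically in tcp_sock_get_key/posix_sock_psk_find_session_server_cb: the target resolves the handshake to a PSK identity built from the host and subsystem NQNs, printed as 'NVMe0R01 <hostnqn> <subnqn>' in the errors, and key0 was registered via nvmf_subsystem_add_host for the host1/cnode1 pair only. The lookup, reduced to a plain table (the table is an illustration, not SPDK's actual structure):

declare -A psk_by_identity=(
  ["NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1"]=key0
)
for identity in \
  "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" \
  "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2"; do
  if [[ -z ${psk_by_identity[$identity]:-} ]]; then
    echo "Could not find PSK for identity: $identity"   # handshake aborted
  fi
done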
00:23:33.782 [2024-11-07 13:28:41.673398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893395 ] 00:23:34.043 [2024-11-07 13:28:41.792721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.043 [2024-11-07 13:28:41.866021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.614 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:34.614 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:34.614 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4PwVgsX2nf 00:23:34.874 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.874 [2024-11-07 13:28:42.791405] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.874 [2024-11-07 13:28:42.798426] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:34.874 [2024-11-07 13:28:42.798453] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:34.874 [2024-11-07 13:28:42.798484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:34.874 [2024-11-07 13:28:42.798777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (107): Transport endpoint is not connected 00:23:34.874 [2024-11-07 13:28:42.799763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:23:34.874 [2024-11-07 13:28:42.800763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:34.874 [2024-11-07 13:28:42.800778] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:34.874 [2024-11-07 13:28:42.800790] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:34.874 [2024-11-07 13:28:42.800806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:34.874 request: 00:23:34.874 { 00:23:34.874 "name": "TLSTEST", 00:23:34.874 "trtype": "tcp", 00:23:34.875 "traddr": "10.0.0.2", 00:23:34.875 "adrfam": "ipv4", 00:23:34.875 "trsvcid": "4420", 00:23:34.875 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:34.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.875 "prchk_reftag": false, 00:23:34.875 "prchk_guard": false, 00:23:34.875 "hdgst": false, 00:23:34.875 "ddgst": false, 00:23:34.875 "psk": "key0", 00:23:34.875 "allow_unrecognized_csi": false, 00:23:34.875 "method": "bdev_nvme_attach_controller", 00:23:34.875 "req_id": 1 00:23:34.875 } 00:23:34.875 Got JSON-RPC error response 00:23:34.875 response: 00:23:34.875 { 00:23:34.875 "code": -5, 00:23:34.875 "message": "Input/output error" 00:23:34.875 } 00:23:34.875 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3893395 00:23:34.875 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3893395 ']' 00:23:34.875 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3893395 00:23:34.875 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:34.875 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:34.875 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3893395 00:23:35.136 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:35.136 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:35.136 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3893395' 00:23:35.136 killing process with pid 3893395 00:23:35.136 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3893395 00:23:35.136 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.136 00:23:35.136 Latency(us) 00:23:35.136 [2024-11-07T12:28:43.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.136 [2024-11-07T12:28:43.143Z] =================================================================================================================== 00:23:35.136 [2024-11-07T12:28:43.143Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.136 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3893395 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:35.397 
13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3893739 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3893739 /var/tmp/bdevperf.sock 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3893739 ']' 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:35.397 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.658 [2024-11-07 13:28:43.410466] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
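The run starting here is wrapped in NOT (tls.sh@156) and passes an empty string as the key path, so the suite expects the setup itself to fail: keyring_file_add_key only accepts absolute paths and should reject the key before any connection is attempted, as the trace below confirms. A sketch of the check being exercised:

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
# -> keyring_file_check_path: "Non-absolute paths are not allowed",
#    JSON-RPC code -1 (Operation not permitted); the follow-up
#    bdev_nvme_attach_controller then fails with "Could not load PSK: key0".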
00:23:35.658 [2024-11-07 13:28:43.410574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893739 ] 00:23:35.658 [2024-11-07 13:28:43.528817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.658 [2024-11-07 13:28:43.602201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.230 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:36.230 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:36.230 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:36.490 [2024-11-07 13:28:44.379366] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:36.490 [2024-11-07 13:28:44.379402] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:36.490 request: 00:23:36.490 { 00:23:36.490 "name": "key0", 00:23:36.490 "path": "", 00:23:36.490 "method": "keyring_file_add_key", 00:23:36.490 "req_id": 1 00:23:36.490 } 00:23:36.490 Got JSON-RPC error response 00:23:36.490 response: 00:23:36.490 { 00:23:36.490 "code": -1, 00:23:36.490 "message": "Operation not permitted" 00:23:36.490 } 00:23:36.490 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:36.750 [2024-11-07 13:28:44.547886] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.750 [2024-11-07 13:28:44.547924] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:36.750 request: 00:23:36.750 { 00:23:36.750 "name": "TLSTEST", 00:23:36.750 "trtype": "tcp", 00:23:36.750 "traddr": "10.0.0.2", 00:23:36.750 "adrfam": "ipv4", 00:23:36.751 "trsvcid": "4420", 00:23:36.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.751 "prchk_reftag": false, 00:23:36.751 "prchk_guard": false, 00:23:36.751 "hdgst": false, 00:23:36.751 "ddgst": false, 00:23:36.751 "psk": "key0", 00:23:36.751 "allow_unrecognized_csi": false, 00:23:36.751 "method": "bdev_nvme_attach_controller", 00:23:36.751 "req_id": 1 00:23:36.751 } 00:23:36.751 Got JSON-RPC error response 00:23:36.751 response: 00:23:36.751 { 00:23:36.751 "code": -126, 00:23:36.751 "message": "Required key not available" 00:23:36.751 } 00:23:36.751 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3893739 00:23:36.751 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3893739 ']' 00:23:36.751 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3893739 00:23:36.751 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:36.751 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:36.751 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3893739 00:23:36.751 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:36.751 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:36.751 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3893739' 00:23:36.751 killing process with pid 3893739 00:23:36.751 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3893739 00:23:36.751 Received shutdown signal, test time was about 10.000000 seconds 00:23:36.751 00:23:36.751 Latency(us) 00:23:36.751 [2024-11-07T12:28:44.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.751 [2024-11-07T12:28:44.758Z] =================================================================================================================== 00:23:36.751 [2024-11-07T12:28:44.758Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:36.751 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3893739 00:23:37.321 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:37.321 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:37.321 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:37.321 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:37.321 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:37.321 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3887702 00:23:37.321 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3887702 ']' 00:23:37.321 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3887702 00:23:37.321 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:37.321 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:37.321 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3887702 00:23:37.321 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:37.322 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:37.322 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3887702' 00:23:37.322 killing process with pid 3887702 00:23:37.322 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3887702 00:23:37.322 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3887702 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:37.892 13:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.2MGxW7SPjs 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.2MGxW7SPjs 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3894334 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3894334 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3894334 ']' 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.892 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:37.893 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.153 [2024-11-07 13:28:45.967692] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:23:38.153 [2024-11-07 13:28:45.967815] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.153 [2024-11-07 13:28:46.131053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.412 [2024-11-07 13:28:46.204129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.412 [2024-11-07 13:28:46.204170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
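key_long above is the PSK interchange form of the 48-character configured key: the prefix NVMeTLSkey-1:<hh>: (here 02, which selects the SHA-384 cipher family; 01 would be SHA-256), then base64 over the key bytes with a 4-byte checksum appended, then a trailing colon. A minimal sketch equivalent to the format_key helper traced above, assuming the checksum is the standard zlib CRC32 packed little-endian:

key=00112233445566778899aabbccddeeff0011223344556677   # used as literal ASCII bytes
python - "$key" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                  # configured PSK bytes
crc = struct.pack("<I", zlib.crc32(key))    # CRC32, little-endian (assumption)
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
PY

The result is written to a mktemp file and chmod 0600, since the keyring refuses key files that anyone but the owner can read; the 0666 cases later in the trace rely on exactly that check.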
00:23:38.412 [2024-11-07 13:28:46.204178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.412 [2024-11-07 13:28:46.204187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.412 [2024-11-07 13:28:46.204197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.412 [2024-11-07 13:28:46.205153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.983 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:38.983 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:38.983 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.983 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:38.983 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.983 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.983 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.2MGxW7SPjs 00:23:38.983 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2MGxW7SPjs 00:23:38.983 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:38.983 [2024-11-07 13:28:46.919314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.983 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:39.243 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:39.243 [2024-11-07 13:28:47.240111] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:39.243 [2024-11-07 13:28:47.240356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.502 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:39.502 malloc0 00:23:39.502 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:39.762 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs 00:23:39.762 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2MGxW7SPjs 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2MGxW7SPjs 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3894767 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3894767 /var/tmp/bdevperf.sock 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3894767 ']' 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:40.023 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.023 [2024-11-07 13:28:47.972608] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
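Before this bdevperf instance came up, setup_nvmf_tgt (tls.sh@166, traced above) configured the target end to end. Condensed, the target-side sequence is the following, with rpc.py talking to the target's default /var/tmp/spdk.sock:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k          # -k: TLS listener (experimental)
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0

With both sides holding the same 0600 key file, the attach below succeeds and TLSTESTn1 carries the 10-second verify workload.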
00:23:40.023 [2024-11-07 13:28:47.972717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894767 ] 00:23:40.283 [2024-11-07 13:28:48.092470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.283 [2024-11-07 13:28:48.165528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.853 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:40.853 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:40.853 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs 00:23:41.114 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:41.114 [2024-11-07 13:28:49.091147] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.374 TLSTESTn1 00:23:41.374 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:41.374 Running I/O for 10 seconds... 00:23:43.704 5146.00 IOPS, 20.10 MiB/s [2024-11-07T12:28:52.407Z] 4892.50 IOPS, 19.11 MiB/s [2024-11-07T12:28:53.442Z] 4917.33 IOPS, 19.21 MiB/s [2024-11-07T12:28:54.383Z] 4924.75 IOPS, 19.24 MiB/s [2024-11-07T12:28:55.323Z] 4852.60 IOPS, 18.96 MiB/s [2024-11-07T12:28:56.709Z] 4784.50 IOPS, 18.69 MiB/s [2024-11-07T12:28:57.650Z] 4751.57 IOPS, 18.56 MiB/s [2024-11-07T12:28:58.591Z] 4849.75 IOPS, 18.94 MiB/s [2024-11-07T12:28:59.535Z] 4759.00 IOPS, 18.59 MiB/s [2024-11-07T12:28:59.535Z] 4672.90 IOPS, 18.25 MiB/s 00:23:51.528 Latency(us) 00:23:51.528 [2024-11-07T12:28:59.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.528 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:51.528 Verification LBA range: start 0x0 length 0x2000 00:23:51.528 TLSTESTn1 : 10.06 4659.31 18.20 0.00 0.00 27384.78 5242.88 52647.25 00:23:51.528 [2024-11-07T12:28:59.535Z] =================================================================================================================== 00:23:51.528 [2024-11-07T12:28:59.535Z] Total : 4659.31 18.20 0.00 0.00 27384.78 5242.88 52647.25 00:23:51.528 { 00:23:51.528 "results": [ 00:23:51.528 { 00:23:51.528 "job": "TLSTESTn1", 00:23:51.528 "core_mask": "0x4", 00:23:51.528 "workload": "verify", 00:23:51.528 "status": "finished", 00:23:51.528 "verify_range": { 00:23:51.528 "start": 0, 00:23:51.528 "length": 8192 00:23:51.528 }, 00:23:51.528 "queue_depth": 128, 00:23:51.528 "io_size": 4096, 00:23:51.528 "runtime": 10.056424, 00:23:51.528 "iops": 4659.310307520845, 00:23:51.528 "mibps": 18.2004308887533, 00:23:51.528 "io_failed": 0, 00:23:51.528 "io_timeout": 0, 00:23:51.528 "avg_latency_us": 27384.77966649593, 00:23:51.528 "min_latency_us": 5242.88, 00:23:51.528 "max_latency_us": 52647.253333333334 00:23:51.528 } 00:23:51.528 ], 00:23:51.528 "core_count": 1 
00:23:51.528 } 00:23:51.528 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.528 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3894767 00:23:51.528 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3894767 ']' 00:23:51.528 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3894767 00:23:51.528 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:51.528 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:51.528 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3894767 00:23:51.528 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:51.528 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:51.528 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3894767' 00:23:51.528 killing process with pid 3894767 00:23:51.528 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3894767 00:23:51.528 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.528 00:23:51.528 Latency(us) 00:23:51.528 [2024-11-07T12:28:59.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.528 [2024-11-07T12:28:59.535Z] =================================================================================================================== 00:23:51.528 [2024-11-07T12:28:59.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.528 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3894767 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.2MGxW7SPjs 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2MGxW7SPjs 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2MGxW7SPjs 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2MGxW7SPjs 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:52.102 13:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2MGxW7SPjs 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3897084 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3897084 /var/tmp/bdevperf.sock 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3897084 ']' 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:52.102 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.102 [2024-11-07 13:29:00.004349] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
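This instance exercises the key-file permission check: tls.sh@171 loosened the file to 0666 before the run, and the keyring refuses group- or world-readable key files, so the NOT-wrapped run_bdevperf is expected to fail at the key-add step. A sketch of the failing path:

chmod 0666 /tmp/tmp.2MGxW7SPjs
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs
# -> keyring_file_check_path: "Invalid permissions for key file
#    '/tmp/tmp.2MGxW7SPjs': 0100666", JSON-RPC code -1; the subsequent
#    attach then fails with "Could not load PSK: key0" (code -126).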
00:23:52.102 [2024-11-07 13:29:00.004461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897084 ] 00:23:52.363 [2024-11-07 13:29:00.124292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.363 [2024-11-07 13:29:00.198715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.934 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:52.934 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:52.934 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs 00:23:52.934 [2024-11-07 13:29:00.928614] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2MGxW7SPjs': 0100666 00:23:52.934 [2024-11-07 13:29:00.928652] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:52.934 request: 00:23:52.934 { 00:23:52.934 "name": "key0", 00:23:52.934 "path": "/tmp/tmp.2MGxW7SPjs", 00:23:52.934 "method": "keyring_file_add_key", 00:23:52.934 "req_id": 1 00:23:52.934 } 00:23:52.934 Got JSON-RPC error response 00:23:52.934 response: 00:23:52.934 { 00:23:52.934 "code": -1, 00:23:52.934 "message": "Operation not permitted" 00:23:52.934 } 00:23:53.196 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.196 [2024-11-07 13:29:01.105147] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.196 [2024-11-07 13:29:01.105182] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:53.196 request: 00:23:53.196 { 00:23:53.196 "name": "TLSTEST", 00:23:53.196 "trtype": "tcp", 00:23:53.196 "traddr": "10.0.0.2", 00:23:53.196 "adrfam": "ipv4", 00:23:53.196 "trsvcid": "4420", 00:23:53.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.196 "prchk_reftag": false, 00:23:53.196 "prchk_guard": false, 00:23:53.196 "hdgst": false, 00:23:53.196 "ddgst": false, 00:23:53.196 "psk": "key0", 00:23:53.196 "allow_unrecognized_csi": false, 00:23:53.196 "method": "bdev_nvme_attach_controller", 00:23:53.196 "req_id": 1 00:23:53.196 } 00:23:53.196 Got JSON-RPC error response 00:23:53.196 response: 00:23:53.196 { 00:23:53.196 "code": -126, 00:23:53.196 "message": "Required key not available" 00:23:53.196 } 00:23:53.196 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3897084 00:23:53.196 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3897084 ']' 00:23:53.196 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3897084 00:23:53.196 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:53.196 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:53.196 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3897084 00:23:53.457 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:53.457 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:53.457 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3897084' 00:23:53.457 killing process with pid 3897084 00:23:53.457 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3897084 00:23:53.457 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.457 00:23:53.457 Latency(us) 00:23:53.457 [2024-11-07T12:29:01.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.457 [2024-11-07T12:29:01.464Z] =================================================================================================================== 00:23:53.457 [2024-11-07T12:29:01.464Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:53.457 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3897084 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3894334 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3894334 ']' 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3894334 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3894334 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3894334' 00:23:53.780 killing process with pid 3894334 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3894334 00:23:53.780 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3894334 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3897436 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3897436 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3897436 ']' 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:54.350 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.611 [2024-11-07 13:29:02.424047] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:23:54.611 [2024-11-07 13:29:02.424154] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.611 [2024-11-07 13:29:02.581262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.871 [2024-11-07 13:29:02.654033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.871 [2024-11-07 13:29:02.654077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.871 [2024-11-07 13:29:02.654085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.871 [2024-11-07 13:29:02.654094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.871 [2024-11-07 13:29:02.654104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
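The restarted target repeats the permission test from the target side (tls.sh@178, traced below): setup_nvmf_tgt proceeds normally until the keyring add rejects the still-0666 key file, after which nvmf_subsystem_add_host cannot resolve key0. Condensed:

rpc.py keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs    # fails: 0100666 permissions
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0
# -> "Key 'key0' does not exist" / "Unable to add host to TCP transport",
#    surfaced as JSON-RPC -32603 (Internal error).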
00:23:54.871 [2024-11-07 13:29:02.655067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.2MGxW7SPjs 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.2MGxW7SPjs 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.2MGxW7SPjs 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2MGxW7SPjs 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:55.443 [2024-11-07 13:29:03.373099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.443 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:55.703 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:55.963 [2024-11-07 13:29:03.709942] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.963 [2024-11-07 13:29:03.710197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.963 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:55.963 malloc0 00:23:55.963 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:56.223 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs 00:23:56.223 [2024-11-07 
13:29:04.207089] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2MGxW7SPjs': 0100666 00:23:56.223 [2024-11-07 13:29:04.207125] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:56.223 request: 00:23:56.223 { 00:23:56.223 "name": "key0", 00:23:56.223 "path": "/tmp/tmp.2MGxW7SPjs", 00:23:56.223 "method": "keyring_file_add_key", 00:23:56.223 "req_id": 1 00:23:56.223 } 00:23:56.223 Got JSON-RPC error response 00:23:56.223 response: 00:23:56.223 { 00:23:56.223 "code": -1, 00:23:56.223 "message": "Operation not permitted" 00:23:56.223 } 00:23:56.223 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:56.484 [2024-11-07 13:29:04.375537] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:56.484 [2024-11-07 13:29:04.375581] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:56.484 request: 00:23:56.484 { 00:23:56.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.484 "host": "nqn.2016-06.io.spdk:host1", 00:23:56.484 "psk": "key0", 00:23:56.484 "method": "nvmf_subsystem_add_host", 00:23:56.484 "req_id": 1 00:23:56.485 } 00:23:56.485 Got JSON-RPC error response 00:23:56.485 response: 00:23:56.485 { 00:23:56.485 "code": -32603, 00:23:56.485 "message": "Internal error" 00:23:56.485 } 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3897436 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3897436 ']' 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3897436 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3897436 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3897436' 00:23:56.485 killing process with pid 3897436 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3897436 00:23:56.485 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3897436 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.2MGxW7SPjs 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:57.150 13:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3898125 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3898125 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3898125 ']' 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:57.150 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.410 [2024-11-07 13:29:05.164648] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:23:57.410 [2024-11-07 13:29:05.164770] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.410 [2024-11-07 13:29:05.320608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.410 [2024-11-07 13:29:05.395796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.410 [2024-11-07 13:29:05.395835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.410 [2024-11-07 13:29:05.395843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.410 [2024-11-07 13:29:05.395852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.410 [2024-11-07 13:29:05.395860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
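With the key file restored to 0600 (tls.sh@182), the suite rebuilds the target and runs the happy path once more, this time also snapshotting both applications' JSON configuration (tls.sh@198 and @199). The save_config dumps that follow can be replayed to reproduce the same TLS setup without reissuing the RPCs; a sketch, where the --json replay flag is assumed standard SPDK application behaviour rather than shown in this trace:

rpc.py save_config > tgt.json                           # target side (dump below)
rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf.json
nvmf_tgt --json tgt.json                                # later replay (assumption)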
00:23:57.410 [2024-11-07 13:29:05.396822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.980 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:57.980 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:57.980 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.980 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.980 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.980 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.980 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.2MGxW7SPjs 00:23:57.980 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2MGxW7SPjs 00:23:57.980 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:58.239 [2024-11-07 13:29:06.130165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.239 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:58.499 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:58.499 [2024-11-07 13:29:06.467009] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.499 [2024-11-07 13:29:06.467263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.499 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:58.758 malloc0 00:23:58.758 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:59.018 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs 00:23:59.018 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:59.278 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3898483 00:23:59.278 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:59.278 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:59.278 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3898483 /var/tmp/bdevperf.sock 00:23:59.278 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3898483 ']' 00:23:59.278 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.278 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:59.278 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.278 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:59.278 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.278 [2024-11-07 13:29:07.266332] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:23:59.278 [2024-11-07 13:29:07.266448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898483 ] 00:23:59.537 [2024-11-07 13:29:07.382286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.537 [2024-11-07 13:29:07.455329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.108 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:00.108 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:00.108 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs 00:24:00.368 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:00.368 [2024-11-07 13:29:08.320390] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.627 TLSTESTn1 00:24:00.627 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:00.888 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:00.888 "subsystems": [ 00:24:00.888 { 00:24:00.888 "subsystem": "keyring", 00:24:00.888 "config": [ 00:24:00.888 { 00:24:00.888 "method": "keyring_file_add_key", 00:24:00.888 "params": { 00:24:00.888 "name": "key0", 00:24:00.888 "path": "/tmp/tmp.2MGxW7SPjs" 00:24:00.888 } 00:24:00.888 } 00:24:00.888 ] 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "subsystem": "iobuf", 00:24:00.888 "config": [ 00:24:00.888 { 00:24:00.888 "method": "iobuf_set_options", 00:24:00.888 "params": { 00:24:00.888 "small_pool_count": 8192, 00:24:00.888 "large_pool_count": 1024, 00:24:00.888 "small_bufsize": 8192, 00:24:00.888 "large_bufsize": 135168, 00:24:00.888 "enable_numa": false 00:24:00.888 } 00:24:00.888 } 00:24:00.888 ] 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "subsystem": "sock", 00:24:00.888 "config": [ 00:24:00.888 { 00:24:00.888 "method": "sock_set_default_impl", 00:24:00.888 "params": { 00:24:00.888 "impl_name": "posix" 
00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "sock_impl_set_options", 00:24:00.888 "params": { 00:24:00.888 "impl_name": "ssl", 00:24:00.888 "recv_buf_size": 4096, 00:24:00.888 "send_buf_size": 4096, 00:24:00.888 "enable_recv_pipe": true, 00:24:00.888 "enable_quickack": false, 00:24:00.888 "enable_placement_id": 0, 00:24:00.888 "enable_zerocopy_send_server": true, 00:24:00.888 "enable_zerocopy_send_client": false, 00:24:00.888 "zerocopy_threshold": 0, 00:24:00.888 "tls_version": 0, 00:24:00.888 "enable_ktls": false 00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "sock_impl_set_options", 00:24:00.888 "params": { 00:24:00.888 "impl_name": "posix", 00:24:00.888 "recv_buf_size": 2097152, 00:24:00.888 "send_buf_size": 2097152, 00:24:00.888 "enable_recv_pipe": true, 00:24:00.888 "enable_quickack": false, 00:24:00.888 "enable_placement_id": 0, 00:24:00.888 "enable_zerocopy_send_server": true, 00:24:00.888 "enable_zerocopy_send_client": false, 00:24:00.888 "zerocopy_threshold": 0, 00:24:00.888 "tls_version": 0, 00:24:00.888 "enable_ktls": false 00:24:00.888 } 00:24:00.888 } 00:24:00.888 ] 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "subsystem": "vmd", 00:24:00.888 "config": [] 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "subsystem": "accel", 00:24:00.888 "config": [ 00:24:00.888 { 00:24:00.888 "method": "accel_set_options", 00:24:00.888 "params": { 00:24:00.888 "small_cache_size": 128, 00:24:00.888 "large_cache_size": 16, 00:24:00.888 "task_count": 2048, 00:24:00.888 "sequence_count": 2048, 00:24:00.888 "buf_count": 2048 00:24:00.888 } 00:24:00.888 } 00:24:00.888 ] 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "subsystem": "bdev", 00:24:00.888 "config": [ 00:24:00.888 { 00:24:00.888 "method": "bdev_set_options", 00:24:00.888 "params": { 00:24:00.888 "bdev_io_pool_size": 65535, 00:24:00.888 "bdev_io_cache_size": 256, 00:24:00.888 "bdev_auto_examine": true, 00:24:00.888 "iobuf_small_cache_size": 128, 00:24:00.888 "iobuf_large_cache_size": 16 00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "bdev_raid_set_options", 00:24:00.888 "params": { 00:24:00.888 "process_window_size_kb": 1024, 00:24:00.888 "process_max_bandwidth_mb_sec": 0 00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "bdev_iscsi_set_options", 00:24:00.888 "params": { 00:24:00.888 "timeout_sec": 30 00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "bdev_nvme_set_options", 00:24:00.888 "params": { 00:24:00.888 "action_on_timeout": "none", 00:24:00.888 "timeout_us": 0, 00:24:00.888 "timeout_admin_us": 0, 00:24:00.888 "keep_alive_timeout_ms": 10000, 00:24:00.888 "arbitration_burst": 0, 00:24:00.888 "low_priority_weight": 0, 00:24:00.888 "medium_priority_weight": 0, 00:24:00.888 "high_priority_weight": 0, 00:24:00.888 "nvme_adminq_poll_period_us": 10000, 00:24:00.888 "nvme_ioq_poll_period_us": 0, 00:24:00.888 "io_queue_requests": 0, 00:24:00.888 "delay_cmd_submit": true, 00:24:00.888 "transport_retry_count": 4, 00:24:00.888 "bdev_retry_count": 3, 00:24:00.888 "transport_ack_timeout": 0, 00:24:00.888 "ctrlr_loss_timeout_sec": 0, 00:24:00.888 "reconnect_delay_sec": 0, 00:24:00.888 "fast_io_fail_timeout_sec": 0, 00:24:00.888 "disable_auto_failback": false, 00:24:00.888 "generate_uuids": false, 00:24:00.888 "transport_tos": 0, 00:24:00.888 "nvme_error_stat": false, 00:24:00.888 "rdma_srq_size": 0, 00:24:00.888 "io_path_stat": false, 00:24:00.888 "allow_accel_sequence": false, 00:24:00.888 "rdma_max_cq_size": 0, 00:24:00.888 
"rdma_cm_event_timeout_ms": 0, 00:24:00.888 "dhchap_digests": [ 00:24:00.888 "sha256", 00:24:00.888 "sha384", 00:24:00.888 "sha512" 00:24:00.888 ], 00:24:00.888 "dhchap_dhgroups": [ 00:24:00.888 "null", 00:24:00.888 "ffdhe2048", 00:24:00.888 "ffdhe3072", 00:24:00.888 "ffdhe4096", 00:24:00.888 "ffdhe6144", 00:24:00.888 "ffdhe8192" 00:24:00.888 ] 00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "bdev_nvme_set_hotplug", 00:24:00.888 "params": { 00:24:00.888 "period_us": 100000, 00:24:00.888 "enable": false 00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "bdev_malloc_create", 00:24:00.888 "params": { 00:24:00.888 "name": "malloc0", 00:24:00.888 "num_blocks": 8192, 00:24:00.888 "block_size": 4096, 00:24:00.888 "physical_block_size": 4096, 00:24:00.888 "uuid": "ffcea043-4549-4b19-abde-b03fd5ce64da", 00:24:00.888 "optimal_io_boundary": 0, 00:24:00.888 "md_size": 0, 00:24:00.888 "dif_type": 0, 00:24:00.888 "dif_is_head_of_md": false, 00:24:00.888 "dif_pi_format": 0 00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "bdev_wait_for_examine" 00:24:00.888 } 00:24:00.888 ] 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "subsystem": "nbd", 00:24:00.888 "config": [] 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "subsystem": "scheduler", 00:24:00.888 "config": [ 00:24:00.888 { 00:24:00.888 "method": "framework_set_scheduler", 00:24:00.888 "params": { 00:24:00.888 "name": "static" 00:24:00.888 } 00:24:00.888 } 00:24:00.888 ] 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "subsystem": "nvmf", 00:24:00.888 "config": [ 00:24:00.888 { 00:24:00.888 "method": "nvmf_set_config", 00:24:00.888 "params": { 00:24:00.888 "discovery_filter": "match_any", 00:24:00.888 "admin_cmd_passthru": { 00:24:00.888 "identify_ctrlr": false 00:24:00.888 }, 00:24:00.888 "dhchap_digests": [ 00:24:00.888 "sha256", 00:24:00.888 "sha384", 00:24:00.888 "sha512" 00:24:00.888 ], 00:24:00.888 "dhchap_dhgroups": [ 00:24:00.888 "null", 00:24:00.888 "ffdhe2048", 00:24:00.888 "ffdhe3072", 00:24:00.888 "ffdhe4096", 00:24:00.888 "ffdhe6144", 00:24:00.888 "ffdhe8192" 00:24:00.888 ] 00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "nvmf_set_max_subsystems", 00:24:00.888 "params": { 00:24:00.888 "max_subsystems": 1024 00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "nvmf_set_crdt", 00:24:00.888 "params": { 00:24:00.888 "crdt1": 0, 00:24:00.888 "crdt2": 0, 00:24:00.888 "crdt3": 0 00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "nvmf_create_transport", 00:24:00.888 "params": { 00:24:00.888 "trtype": "TCP", 00:24:00.888 "max_queue_depth": 128, 00:24:00.888 "max_io_qpairs_per_ctrlr": 127, 00:24:00.888 "in_capsule_data_size": 4096, 00:24:00.888 "max_io_size": 131072, 00:24:00.888 "io_unit_size": 131072, 00:24:00.888 "max_aq_depth": 128, 00:24:00.888 "num_shared_buffers": 511, 00:24:00.888 "buf_cache_size": 4294967295, 00:24:00.888 "dif_insert_or_strip": false, 00:24:00.888 "zcopy": false, 00:24:00.888 "c2h_success": false, 00:24:00.888 "sock_priority": 0, 00:24:00.888 "abort_timeout_sec": 1, 00:24:00.888 "ack_timeout": 0, 00:24:00.888 "data_wr_pool_size": 0 00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "nvmf_create_subsystem", 00:24:00.888 "params": { 00:24:00.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.888 "allow_any_host": false, 00:24:00.888 "serial_number": "SPDK00000000000001", 00:24:00.888 "model_number": "SPDK bdev Controller", 00:24:00.888 "max_namespaces": 10, 00:24:00.888 "min_cntlid": 1, 00:24:00.888 
"max_cntlid": 65519, 00:24:00.888 "ana_reporting": false 00:24:00.888 } 00:24:00.888 }, 00:24:00.888 { 00:24:00.888 "method": "nvmf_subsystem_add_host", 00:24:00.888 "params": { 00:24:00.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.889 "host": "nqn.2016-06.io.spdk:host1", 00:24:00.889 "psk": "key0" 00:24:00.889 } 00:24:00.889 }, 00:24:00.889 { 00:24:00.889 "method": "nvmf_subsystem_add_ns", 00:24:00.889 "params": { 00:24:00.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.889 "namespace": { 00:24:00.889 "nsid": 1, 00:24:00.889 "bdev_name": "malloc0", 00:24:00.889 "nguid": "FFCEA04345494B19ABDEB03FD5CE64DA", 00:24:00.889 "uuid": "ffcea043-4549-4b19-abde-b03fd5ce64da", 00:24:00.889 "no_auto_visible": false 00:24:00.889 } 00:24:00.889 } 00:24:00.889 }, 00:24:00.889 { 00:24:00.889 "method": "nvmf_subsystem_add_listener", 00:24:00.889 "params": { 00:24:00.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.889 "listen_address": { 00:24:00.889 "trtype": "TCP", 00:24:00.889 "adrfam": "IPv4", 00:24:00.889 "traddr": "10.0.0.2", 00:24:00.889 "trsvcid": "4420" 00:24:00.889 }, 00:24:00.889 "secure_channel": true 00:24:00.889 } 00:24:00.889 } 00:24:00.889 ] 00:24:00.889 } 00:24:00.889 ] 00:24:00.889 }' 00:24:00.889 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:01.149 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:01.149 "subsystems": [ 00:24:01.149 { 00:24:01.149 "subsystem": "keyring", 00:24:01.149 "config": [ 00:24:01.149 { 00:24:01.149 "method": "keyring_file_add_key", 00:24:01.149 "params": { 00:24:01.149 "name": "key0", 00:24:01.149 "path": "/tmp/tmp.2MGxW7SPjs" 00:24:01.149 } 00:24:01.149 } 00:24:01.149 ] 00:24:01.149 }, 00:24:01.149 { 00:24:01.149 "subsystem": "iobuf", 00:24:01.149 "config": [ 00:24:01.149 { 00:24:01.149 "method": "iobuf_set_options", 00:24:01.149 "params": { 00:24:01.149 "small_pool_count": 8192, 00:24:01.149 "large_pool_count": 1024, 00:24:01.149 "small_bufsize": 8192, 00:24:01.149 "large_bufsize": 135168, 00:24:01.149 "enable_numa": false 00:24:01.149 } 00:24:01.149 } 00:24:01.149 ] 00:24:01.149 }, 00:24:01.149 { 00:24:01.149 "subsystem": "sock", 00:24:01.149 "config": [ 00:24:01.149 { 00:24:01.149 "method": "sock_set_default_impl", 00:24:01.149 "params": { 00:24:01.149 "impl_name": "posix" 00:24:01.149 } 00:24:01.149 }, 00:24:01.149 { 00:24:01.149 "method": "sock_impl_set_options", 00:24:01.149 "params": { 00:24:01.149 "impl_name": "ssl", 00:24:01.149 "recv_buf_size": 4096, 00:24:01.149 "send_buf_size": 4096, 00:24:01.149 "enable_recv_pipe": true, 00:24:01.149 "enable_quickack": false, 00:24:01.149 "enable_placement_id": 0, 00:24:01.149 "enable_zerocopy_send_server": true, 00:24:01.149 "enable_zerocopy_send_client": false, 00:24:01.149 "zerocopy_threshold": 0, 00:24:01.149 "tls_version": 0, 00:24:01.149 "enable_ktls": false 00:24:01.149 } 00:24:01.149 }, 00:24:01.149 { 00:24:01.149 "method": "sock_impl_set_options", 00:24:01.149 "params": { 00:24:01.149 "impl_name": "posix", 00:24:01.149 "recv_buf_size": 2097152, 00:24:01.149 "send_buf_size": 2097152, 00:24:01.149 "enable_recv_pipe": true, 00:24:01.149 "enable_quickack": false, 00:24:01.149 "enable_placement_id": 0, 00:24:01.149 "enable_zerocopy_send_server": true, 00:24:01.149 "enable_zerocopy_send_client": false, 00:24:01.149 "zerocopy_threshold": 0, 00:24:01.149 "tls_version": 0, 00:24:01.149 "enable_ktls": false 00:24:01.149 } 00:24:01.149 
} 00:24:01.149 ] 00:24:01.149 }, 00:24:01.149 { 00:24:01.149 "subsystem": "vmd", 00:24:01.149 "config": [] 00:24:01.149 }, 00:24:01.149 { 00:24:01.149 "subsystem": "accel", 00:24:01.149 "config": [ 00:24:01.149 { 00:24:01.149 "method": "accel_set_options", 00:24:01.150 "params": { 00:24:01.150 "small_cache_size": 128, 00:24:01.150 "large_cache_size": 16, 00:24:01.150 "task_count": 2048, 00:24:01.150 "sequence_count": 2048, 00:24:01.150 "buf_count": 2048 00:24:01.150 } 00:24:01.150 } 00:24:01.150 ] 00:24:01.150 }, 00:24:01.150 { 00:24:01.150 "subsystem": "bdev", 00:24:01.150 "config": [ 00:24:01.150 { 00:24:01.150 "method": "bdev_set_options", 00:24:01.150 "params": { 00:24:01.150 "bdev_io_pool_size": 65535, 00:24:01.150 "bdev_io_cache_size": 256, 00:24:01.150 "bdev_auto_examine": true, 00:24:01.150 "iobuf_small_cache_size": 128, 00:24:01.150 "iobuf_large_cache_size": 16 00:24:01.150 } 00:24:01.150 }, 00:24:01.150 { 00:24:01.150 "method": "bdev_raid_set_options", 00:24:01.150 "params": { 00:24:01.150 "process_window_size_kb": 1024, 00:24:01.150 "process_max_bandwidth_mb_sec": 0 00:24:01.150 } 00:24:01.150 }, 00:24:01.150 { 00:24:01.150 "method": "bdev_iscsi_set_options", 00:24:01.150 "params": { 00:24:01.150 "timeout_sec": 30 00:24:01.150 } 00:24:01.150 }, 00:24:01.150 { 00:24:01.150 "method": "bdev_nvme_set_options", 00:24:01.150 "params": { 00:24:01.150 "action_on_timeout": "none", 00:24:01.150 "timeout_us": 0, 00:24:01.150 "timeout_admin_us": 0, 00:24:01.150 "keep_alive_timeout_ms": 10000, 00:24:01.150 "arbitration_burst": 0, 00:24:01.150 "low_priority_weight": 0, 00:24:01.150 "medium_priority_weight": 0, 00:24:01.150 "high_priority_weight": 0, 00:24:01.150 "nvme_adminq_poll_period_us": 10000, 00:24:01.150 "nvme_ioq_poll_period_us": 0, 00:24:01.150 "io_queue_requests": 512, 00:24:01.150 "delay_cmd_submit": true, 00:24:01.150 "transport_retry_count": 4, 00:24:01.150 "bdev_retry_count": 3, 00:24:01.150 "transport_ack_timeout": 0, 00:24:01.150 "ctrlr_loss_timeout_sec": 0, 00:24:01.150 "reconnect_delay_sec": 0, 00:24:01.150 "fast_io_fail_timeout_sec": 0, 00:24:01.150 "disable_auto_failback": false, 00:24:01.150 "generate_uuids": false, 00:24:01.150 "transport_tos": 0, 00:24:01.150 "nvme_error_stat": false, 00:24:01.150 "rdma_srq_size": 0, 00:24:01.150 "io_path_stat": false, 00:24:01.150 "allow_accel_sequence": false, 00:24:01.150 "rdma_max_cq_size": 0, 00:24:01.150 "rdma_cm_event_timeout_ms": 0, 00:24:01.150 "dhchap_digests": [ 00:24:01.150 "sha256", 00:24:01.150 "sha384", 00:24:01.150 "sha512" 00:24:01.150 ], 00:24:01.150 "dhchap_dhgroups": [ 00:24:01.150 "null", 00:24:01.150 "ffdhe2048", 00:24:01.150 "ffdhe3072", 00:24:01.150 "ffdhe4096", 00:24:01.150 "ffdhe6144", 00:24:01.150 "ffdhe8192" 00:24:01.150 ] 00:24:01.150 } 00:24:01.150 }, 00:24:01.150 { 00:24:01.150 "method": "bdev_nvme_attach_controller", 00:24:01.150 "params": { 00:24:01.150 "name": "TLSTEST", 00:24:01.150 "trtype": "TCP", 00:24:01.150 "adrfam": "IPv4", 00:24:01.150 "traddr": "10.0.0.2", 00:24:01.150 "trsvcid": "4420", 00:24:01.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.150 "prchk_reftag": false, 00:24:01.150 "prchk_guard": false, 00:24:01.150 "ctrlr_loss_timeout_sec": 0, 00:24:01.150 "reconnect_delay_sec": 0, 00:24:01.150 "fast_io_fail_timeout_sec": 0, 00:24:01.150 "psk": "key0", 00:24:01.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.150 "hdgst": false, 00:24:01.150 "ddgst": false, 00:24:01.150 "multipath": "multipath" 00:24:01.150 } 00:24:01.150 }, 00:24:01.150 { 00:24:01.150 "method": 
"bdev_nvme_set_hotplug", 00:24:01.150 "params": { 00:24:01.150 "period_us": 100000, 00:24:01.150 "enable": false 00:24:01.150 } 00:24:01.150 }, 00:24:01.150 { 00:24:01.150 "method": "bdev_wait_for_examine" 00:24:01.150 } 00:24:01.150 ] 00:24:01.150 }, 00:24:01.150 { 00:24:01.150 "subsystem": "nbd", 00:24:01.150 "config": [] 00:24:01.150 } 00:24:01.150 ] 00:24:01.150 }' 00:24:01.150 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3898483 00:24:01.150 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3898483 ']' 00:24:01.150 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3898483 00:24:01.150 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:01.150 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:01.150 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3898483 00:24:01.150 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:01.150 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:01.150 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3898483' 00:24:01.150 killing process with pid 3898483 00:24:01.150 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3898483 00:24:01.150 Received shutdown signal, test time was about 10.000000 seconds 00:24:01.150 00:24:01.150 Latency(us) 00:24:01.150 [2024-11-07T12:29:09.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.150 [2024-11-07T12:29:09.157Z] =================================================================================================================== 00:24:01.150 [2024-11-07T12:29:09.157Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:01.150 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3898483 00:24:01.721 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3898125 00:24:01.722 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3898125 ']' 00:24:01.722 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3898125 00:24:01.722 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:01.722 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:01.722 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3898125 00:24:01.722 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:01.722 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:01.722 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3898125' 00:24:01.722 killing process with pid 3898125 00:24:01.722 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3898125 00:24:01.722 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3898125 00:24:02.293 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:02.293 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:02.293 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:02.293 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.293 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:02.293 "subsystems": [ 00:24:02.293 { 00:24:02.293 "subsystem": "keyring", 00:24:02.293 "config": [ 00:24:02.293 { 00:24:02.293 "method": "keyring_file_add_key", 00:24:02.293 "params": { 00:24:02.293 "name": "key0", 00:24:02.293 "path": "/tmp/tmp.2MGxW7SPjs" 00:24:02.293 } 00:24:02.293 } 00:24:02.293 ] 00:24:02.293 }, 00:24:02.293 { 00:24:02.293 "subsystem": "iobuf", 00:24:02.293 "config": [ 00:24:02.293 { 00:24:02.293 "method": "iobuf_set_options", 00:24:02.293 "params": { 00:24:02.293 "small_pool_count": 8192, 00:24:02.293 "large_pool_count": 1024, 00:24:02.293 "small_bufsize": 8192, 00:24:02.293 "large_bufsize": 135168, 00:24:02.293 "enable_numa": false 00:24:02.293 } 00:24:02.293 } 00:24:02.293 ] 00:24:02.293 }, 00:24:02.293 { 00:24:02.293 "subsystem": "sock", 00:24:02.293 "config": [ 00:24:02.293 { 00:24:02.293 "method": "sock_set_default_impl", 00:24:02.293 "params": { 00:24:02.293 "impl_name": "posix" 00:24:02.293 } 00:24:02.293 }, 00:24:02.293 { 00:24:02.293 "method": "sock_impl_set_options", 00:24:02.293 "params": { 00:24:02.293 "impl_name": "ssl", 00:24:02.293 "recv_buf_size": 4096, 00:24:02.293 "send_buf_size": 4096, 00:24:02.293 "enable_recv_pipe": true, 00:24:02.293 "enable_quickack": false, 00:24:02.293 "enable_placement_id": 0, 00:24:02.293 "enable_zerocopy_send_server": true, 00:24:02.294 "enable_zerocopy_send_client": false, 00:24:02.294 "zerocopy_threshold": 0, 00:24:02.294 "tls_version": 0, 00:24:02.294 "enable_ktls": false 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "method": "sock_impl_set_options", 00:24:02.294 "params": { 00:24:02.294 "impl_name": "posix", 00:24:02.294 "recv_buf_size": 2097152, 00:24:02.294 "send_buf_size": 2097152, 00:24:02.294 "enable_recv_pipe": true, 00:24:02.294 "enable_quickack": false, 00:24:02.294 "enable_placement_id": 0, 00:24:02.294 "enable_zerocopy_send_server": true, 00:24:02.294 "enable_zerocopy_send_client": false, 00:24:02.294 "zerocopy_threshold": 0, 00:24:02.294 "tls_version": 0, 00:24:02.294 "enable_ktls": false 00:24:02.294 } 00:24:02.294 } 00:24:02.294 ] 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "subsystem": "vmd", 00:24:02.294 "config": [] 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "subsystem": "accel", 00:24:02.294 "config": [ 00:24:02.294 { 00:24:02.294 "method": "accel_set_options", 00:24:02.294 "params": { 00:24:02.294 "small_cache_size": 128, 00:24:02.294 "large_cache_size": 16, 00:24:02.294 "task_count": 2048, 00:24:02.294 "sequence_count": 2048, 00:24:02.294 "buf_count": 2048 00:24:02.294 } 00:24:02.294 } 00:24:02.294 ] 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "subsystem": "bdev", 00:24:02.294 "config": [ 00:24:02.294 { 00:24:02.294 "method": "bdev_set_options", 00:24:02.294 "params": { 00:24:02.294 "bdev_io_pool_size": 65535, 00:24:02.294 "bdev_io_cache_size": 256, 00:24:02.294 "bdev_auto_examine": true, 00:24:02.294 "iobuf_small_cache_size": 128, 00:24:02.294 "iobuf_large_cache_size": 16 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "method": "bdev_raid_set_options", 00:24:02.294 "params": { 00:24:02.294 
"process_window_size_kb": 1024, 00:24:02.294 "process_max_bandwidth_mb_sec": 0 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "method": "bdev_iscsi_set_options", 00:24:02.294 "params": { 00:24:02.294 "timeout_sec": 30 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "method": "bdev_nvme_set_options", 00:24:02.294 "params": { 00:24:02.294 "action_on_timeout": "none", 00:24:02.294 "timeout_us": 0, 00:24:02.294 "timeout_admin_us": 0, 00:24:02.294 "keep_alive_timeout_ms": 10000, 00:24:02.294 "arbitration_burst": 0, 00:24:02.294 "low_priority_weight": 0, 00:24:02.294 "medium_priority_weight": 0, 00:24:02.294 "high_priority_weight": 0, 00:24:02.294 "nvme_adminq_poll_period_us": 10000, 00:24:02.294 "nvme_ioq_poll_period_us": 0, 00:24:02.294 "io_queue_requests": 0, 00:24:02.294 "delay_cmd_submit": true, 00:24:02.294 "transport_retry_count": 4, 00:24:02.294 "bdev_retry_count": 3, 00:24:02.294 "transport_ack_timeout": 0, 00:24:02.294 "ctrlr_loss_timeout_sec": 0, 00:24:02.294 "reconnect_delay_sec": 0, 00:24:02.294 "fast_io_fail_timeout_sec": 0, 00:24:02.294 "disable_auto_failback": false, 00:24:02.294 "generate_uuids": false, 00:24:02.294 "transport_tos": 0, 00:24:02.294 "nvme_error_stat": false, 00:24:02.294 "rdma_srq_size": 0, 00:24:02.294 "io_path_stat": false, 00:24:02.294 "allow_accel_sequence": false, 00:24:02.294 "rdma_max_cq_size": 0, 00:24:02.294 "rdma_cm_event_timeout_ms": 0, 00:24:02.294 "dhchap_digests": [ 00:24:02.294 "sha256", 00:24:02.294 "sha384", 00:24:02.294 "sha512" 00:24:02.294 ], 00:24:02.294 "dhchap_dhgroups": [ 00:24:02.294 "null", 00:24:02.294 "ffdhe2048", 00:24:02.294 "ffdhe3072", 00:24:02.294 "ffdhe4096", 00:24:02.294 "ffdhe6144", 00:24:02.294 "ffdhe8192" 00:24:02.294 ] 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "method": "bdev_nvme_set_hotplug", 00:24:02.294 "params": { 00:24:02.294 "period_us": 100000, 00:24:02.294 "enable": false 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "method": "bdev_malloc_create", 00:24:02.294 "params": { 00:24:02.294 "name": "malloc0", 00:24:02.294 "num_blocks": 8192, 00:24:02.294 "block_size": 4096, 00:24:02.294 "physical_block_size": 4096, 00:24:02.294 "uuid": "ffcea043-4549-4b19-abde-b03fd5ce64da", 00:24:02.294 "optimal_io_boundary": 0, 00:24:02.294 "md_size": 0, 00:24:02.294 "dif_type": 0, 00:24:02.294 "dif_is_head_of_md": false, 00:24:02.294 "dif_pi_format": 0 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "method": "bdev_wait_for_examine" 00:24:02.294 } 00:24:02.294 ] 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "subsystem": "nbd", 00:24:02.294 "config": [] 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "subsystem": "scheduler", 00:24:02.294 "config": [ 00:24:02.294 { 00:24:02.294 "method": "framework_set_scheduler", 00:24:02.294 "params": { 00:24:02.294 "name": "static" 00:24:02.294 } 00:24:02.294 } 00:24:02.294 ] 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "subsystem": "nvmf", 00:24:02.294 "config": [ 00:24:02.294 { 00:24:02.294 "method": "nvmf_set_config", 00:24:02.294 "params": { 00:24:02.294 "discovery_filter": "match_any", 00:24:02.294 "admin_cmd_passthru": { 00:24:02.294 "identify_ctrlr": false 00:24:02.294 }, 00:24:02.294 "dhchap_digests": [ 00:24:02.294 "sha256", 00:24:02.294 "sha384", 00:24:02.294 "sha512" 00:24:02.294 ], 00:24:02.294 "dhchap_dhgroups": [ 00:24:02.294 "null", 00:24:02.294 "ffdhe2048", 00:24:02.294 "ffdhe3072", 00:24:02.294 "ffdhe4096", 00:24:02.294 "ffdhe6144", 00:24:02.294 "ffdhe8192" 00:24:02.294 ] 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 
00:24:02.294 "method": "nvmf_set_max_subsystems", 00:24:02.294 "params": { 00:24:02.294 "max_subsystems": 1024 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "method": "nvmf_set_crdt", 00:24:02.294 "params": { 00:24:02.294 "crdt1": 0, 00:24:02.294 "crdt2": 0, 00:24:02.294 "crdt3": 0 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "method": "nvmf_create_transport", 00:24:02.294 "params": { 00:24:02.294 "trtype": "TCP", 00:24:02.294 "max_queue_depth": 128, 00:24:02.294 "max_io_qpairs_per_ctrlr": 127, 00:24:02.294 "in_capsule_data_size": 4096, 00:24:02.294 "max_io_size": 131072, 00:24:02.294 "io_unit_size": 131072, 00:24:02.294 "max_aq_depth": 128, 00:24:02.294 "num_shared_buffers": 511, 00:24:02.294 "buf_cache_size": 4294967295, 00:24:02.294 "dif_insert_or_strip": false, 00:24:02.294 "zcopy": false, 00:24:02.294 "c2h_success": false, 00:24:02.294 "sock_priority": 0, 00:24:02.294 "abort_timeout_sec": 1, 00:24:02.294 "ack_timeout": 0, 00:24:02.294 "data_wr_pool_size": 0 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "method": "nvmf_create_subsystem", 00:24:02.294 "params": { 00:24:02.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.294 "allow_any_host": false, 00:24:02.294 "serial_number": "SPDK00000000000001", 00:24:02.294 "model_number": "SPDK bdev Controller", 00:24:02.294 "max_namespaces": 10, 00:24:02.294 "min_cntlid": 1, 00:24:02.294 "max_cntlid": 65519, 00:24:02.294 "ana_reporting": false 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "method": "nvmf_subsystem_add_host", 00:24:02.294 "params": { 00:24:02.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.294 "host": "nqn.2016-06.io.spdk:host1", 00:24:02.294 "psk": "key0" 00:24:02.294 } 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "method": "nvmf_subsystem_add_ns", 00:24:02.294 "params": { 00:24:02.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.294 "namespace": { 00:24:02.294 "nsid": 1, 00:24:02.294 "bdev_name": "malloc0", 00:24:02.295 "nguid": "FFCEA04345494B19ABDEB03FD5CE64DA", 00:24:02.295 "uuid": "ffcea043-4549-4b19-abde-b03fd5ce64da", 00:24:02.295 "no_auto_visible": false 00:24:02.295 } 00:24:02.295 } 00:24:02.295 }, 00:24:02.295 { 00:24:02.295 "method": "nvmf_subsystem_add_listener", 00:24:02.295 "params": { 00:24:02.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.295 "listen_address": { 00:24:02.295 "trtype": "TCP", 00:24:02.295 "adrfam": "IPv4", 00:24:02.295 "traddr": "10.0.0.2", 00:24:02.295 "trsvcid": "4420" 00:24:02.295 }, 00:24:02.295 "secure_channel": true 00:24:02.295 } 00:24:02.295 } 00:24:02.295 ] 00:24:02.295 } 00:24:02.295 ] 00:24:02.295 }' 00:24:02.295 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3899073 00:24:02.295 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3899073 00:24:02.295 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:02.295 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3899073 ']' 00:24:02.295 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.295 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:02.295 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:24:02.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.295 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:02.295 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.295 [2024-11-07 13:29:10.203105] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:24:02.295 [2024-11-07 13:29:10.203224] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.555 [2024-11-07 13:29:10.358164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.555 [2024-11-07 13:29:10.431048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.555 [2024-11-07 13:29:10.431087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.555 [2024-11-07 13:29:10.431096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.555 [2024-11-07 13:29:10.431104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.555 [2024-11-07 13:29:10.431112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.555 [2024-11-07 13:29:10.432076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.814 [2024-11-07 13:29:10.765767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.814 [2024-11-07 13:29:10.797796] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.814 [2024-11-07 13:29:10.798052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3899192 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3899192 /var/tmp/bdevperf.sock 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3899192 ']' 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
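Editor's note: both the target above and the bdevperf client below receive their entire configuration as a JSON blob on a file descriptor (-c /dev/fd/62 and -c /dev/fd/63), which is the footprint of bash process substitution. A minimal sketch of the same launch pattern, with $CONFIG_JSON standing in for the config the test captured via save_config; the flags are the ones visible in the trace, the placeholder config is an assumption:

    # bdevperf stays idle (-z) until driven over its RPC socket; the JSON
    # config arrives through a process-substitution fd instead of a file.
    CONFIG_JSON='{"subsystems": []}'   # placeholder; the real test reuses the saved config
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$CONFIG_JSON")
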
00:24:03.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.076 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:03.076 "subsystems": [ 00:24:03.076 { 00:24:03.076 "subsystem": "keyring", 00:24:03.076 "config": [ 00:24:03.076 { 00:24:03.076 "method": "keyring_file_add_key", 00:24:03.076 "params": { 00:24:03.076 "name": "key0", 00:24:03.076 "path": "/tmp/tmp.2MGxW7SPjs" 00:24:03.076 } 00:24:03.076 } 00:24:03.076 ] 00:24:03.076 }, 00:24:03.076 { 00:24:03.076 "subsystem": "iobuf", 00:24:03.076 "config": [ 00:24:03.076 { 00:24:03.076 "method": "iobuf_set_options", 00:24:03.076 "params": { 00:24:03.076 "small_pool_count": 8192, 00:24:03.076 "large_pool_count": 1024, 00:24:03.076 "small_bufsize": 8192, 00:24:03.076 "large_bufsize": 135168, 00:24:03.076 "enable_numa": false 00:24:03.076 } 00:24:03.076 } 00:24:03.076 ] 00:24:03.076 }, 00:24:03.076 { 00:24:03.076 "subsystem": "sock", 00:24:03.076 "config": [ 00:24:03.076 { 00:24:03.076 "method": "sock_set_default_impl", 00:24:03.076 "params": { 00:24:03.076 "impl_name": "posix" 00:24:03.076 } 00:24:03.076 }, 00:24:03.076 { 00:24:03.076 "method": "sock_impl_set_options", 00:24:03.076 "params": { 00:24:03.076 "impl_name": "ssl", 00:24:03.076 "recv_buf_size": 4096, 00:24:03.076 "send_buf_size": 4096, 00:24:03.076 "enable_recv_pipe": true, 00:24:03.076 "enable_quickack": false, 00:24:03.076 "enable_placement_id": 0, 00:24:03.076 "enable_zerocopy_send_server": true, 00:24:03.076 "enable_zerocopy_send_client": false, 00:24:03.076 "zerocopy_threshold": 0, 00:24:03.076 "tls_version": 0, 00:24:03.076 "enable_ktls": false 00:24:03.076 } 00:24:03.076 }, 00:24:03.076 { 00:24:03.076 "method": "sock_impl_set_options", 00:24:03.076 "params": { 00:24:03.076 "impl_name": "posix", 00:24:03.076 "recv_buf_size": 2097152, 00:24:03.076 "send_buf_size": 2097152, 00:24:03.076 "enable_recv_pipe": true, 00:24:03.076 "enable_quickack": false, 00:24:03.076 "enable_placement_id": 0, 00:24:03.076 "enable_zerocopy_send_server": true, 00:24:03.076 "enable_zerocopy_send_client": false, 00:24:03.076 "zerocopy_threshold": 0, 00:24:03.076 "tls_version": 0, 00:24:03.076 "enable_ktls": false 00:24:03.076 } 00:24:03.076 } 00:24:03.076 ] 00:24:03.076 }, 00:24:03.076 { 00:24:03.076 "subsystem": "vmd", 00:24:03.076 "config": [] 00:24:03.076 }, 00:24:03.076 { 00:24:03.076 "subsystem": "accel", 00:24:03.076 "config": [ 00:24:03.076 { 00:24:03.076 "method": "accel_set_options", 00:24:03.076 "params": { 00:24:03.076 "small_cache_size": 128, 00:24:03.076 "large_cache_size": 16, 00:24:03.076 "task_count": 2048, 00:24:03.076 "sequence_count": 2048, 00:24:03.076 "buf_count": 2048 00:24:03.076 } 00:24:03.076 } 00:24:03.076 ] 00:24:03.076 }, 00:24:03.076 { 00:24:03.076 "subsystem": "bdev", 00:24:03.076 "config": [ 00:24:03.076 { 00:24:03.076 "method": "bdev_set_options", 00:24:03.076 "params": { 00:24:03.076 "bdev_io_pool_size": 65535, 00:24:03.076 "bdev_io_cache_size": 256, 00:24:03.076 "bdev_auto_examine": true, 00:24:03.076 "iobuf_small_cache_size": 128, 
00:24:03.076 "iobuf_large_cache_size": 16 00:24:03.076 } 00:24:03.076 }, 00:24:03.076 { 00:24:03.076 "method": "bdev_raid_set_options", 00:24:03.076 "params": { 00:24:03.076 "process_window_size_kb": 1024, 00:24:03.076 "process_max_bandwidth_mb_sec": 0 00:24:03.076 } 00:24:03.076 }, 00:24:03.076 { 00:24:03.076 "method": "bdev_iscsi_set_options", 00:24:03.076 "params": { 00:24:03.076 "timeout_sec": 30 00:24:03.076 } 00:24:03.076 }, 00:24:03.076 { 00:24:03.076 "method": "bdev_nvme_set_options", 00:24:03.076 "params": { 00:24:03.076 "action_on_timeout": "none", 00:24:03.076 "timeout_us": 0, 00:24:03.076 "timeout_admin_us": 0, 00:24:03.076 "keep_alive_timeout_ms": 10000, 00:24:03.076 "arbitration_burst": 0, 00:24:03.076 "low_priority_weight": 0, 00:24:03.076 "medium_priority_weight": 0, 00:24:03.076 "high_priority_weight": 0, 00:24:03.076 "nvme_adminq_poll_period_us": 10000, 00:24:03.076 "nvme_ioq_poll_period_us": 0, 00:24:03.076 "io_queue_requests": 512, 00:24:03.076 "delay_cmd_submit": true, 00:24:03.076 "transport_retry_count": 4, 00:24:03.076 "bdev_retry_count": 3, 00:24:03.076 "transport_ack_timeout": 0, 00:24:03.076 "ctrlr_loss_timeout_sec": 0, 00:24:03.076 "reconnect_delay_sec": 0, 00:24:03.076 "fast_io_fail_timeout_sec": 0, 00:24:03.076 "disable_auto_failback": false, 00:24:03.076 "generate_uuids": false, 00:24:03.076 "transport_tos": 0, 00:24:03.076 "nvme_error_stat": false, 00:24:03.076 "rdma_srq_size": 0, 00:24:03.076 "io_path_stat": false, 00:24:03.076 "allow_accel_sequence": false, 00:24:03.076 "rdma_max_cq_size": 0, 00:24:03.076 "rdma_cm_event_timeout_ms": 0, 00:24:03.076 "dhchap_digests": [ 00:24:03.076 "sha256", 00:24:03.076 "sha384", 00:24:03.076 "sha512" 00:24:03.076 ], 00:24:03.076 "dhchap_dhgroups": [ 00:24:03.076 "null", 00:24:03.076 "ffdhe2048", 00:24:03.076 "ffdhe3072", 00:24:03.076 "ffdhe4096", 00:24:03.076 "ffdhe6144", 00:24:03.076 "ffdhe8192" 00:24:03.076 ] 00:24:03.076 } 00:24:03.076 }, 00:24:03.076 { 00:24:03.076 "method": "bdev_nvme_attach_controller", 00:24:03.076 "params": { 00:24:03.076 "name": "TLSTEST", 00:24:03.076 "trtype": "TCP", 00:24:03.076 "adrfam": "IPv4", 00:24:03.076 "traddr": "10.0.0.2", 00:24:03.076 "trsvcid": "4420", 00:24:03.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.076 "prchk_reftag": false, 00:24:03.076 "prchk_guard": false, 00:24:03.076 "ctrlr_loss_timeout_sec": 0, 00:24:03.076 "reconnect_delay_sec": 0, 00:24:03.076 "fast_io_fail_timeout_sec": 0, 00:24:03.076 "psk": "key0", 00:24:03.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.076 "hdgst": false, 00:24:03.076 "ddgst": false, 00:24:03.076 "multipath": "multipath" 00:24:03.076 } 00:24:03.076 }, 00:24:03.076 { 00:24:03.076 "method": "bdev_nvme_set_hotplug", 00:24:03.077 "params": { 00:24:03.077 "period_us": 100000, 00:24:03.077 "enable": false 00:24:03.077 } 00:24:03.077 }, 00:24:03.077 { 00:24:03.077 "method": "bdev_wait_for_examine" 00:24:03.077 } 00:24:03.077 ] 00:24:03.077 }, 00:24:03.077 { 00:24:03.077 "subsystem": "nbd", 00:24:03.077 "config": [] 00:24:03.077 } 00:24:03.077 ] 00:24:03.077 }' 00:24:03.077 [2024-11-07 13:29:11.075639] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:24:03.077 [2024-11-07 13:29:11.075752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899192 ] 00:24:03.336 [2024-11-07 13:29:11.191705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.336 [2024-11-07 13:29:11.265461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.595 [2024-11-07 13:29:11.522700] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.856 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:03.856 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:03.856 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:04.116 Running I/O for 10 seconds... 00:24:05.993 4343.00 IOPS, 16.96 MiB/s [2024-11-07T12:29:14.939Z] 4697.00 IOPS, 18.35 MiB/s [2024-11-07T12:29:16.322Z] 4793.67 IOPS, 18.73 MiB/s [2024-11-07T12:29:17.262Z] 4736.00 IOPS, 18.50 MiB/s [2024-11-07T12:29:18.202Z] 4705.80 IOPS, 18.38 MiB/s [2024-11-07T12:29:19.141Z] 4761.00 IOPS, 18.60 MiB/s [2024-11-07T12:29:20.081Z] 4751.00 IOPS, 18.56 MiB/s [2024-11-07T12:29:21.020Z] 4793.88 IOPS, 18.73 MiB/s [2024-11-07T12:29:21.959Z] 4812.00 IOPS, 18.80 MiB/s [2024-11-07T12:29:21.959Z] 4815.60 IOPS, 18.81 MiB/s 00:24:13.952 Latency(us) 00:24:13.952 [2024-11-07T12:29:21.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.952 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:13.952 Verification LBA range: start 0x0 length 0x2000 00:24:13.952 TLSTESTn1 : 10.02 4820.75 18.83 0.00 0.00 26516.37 6389.76 80827.73 00:24:13.952 [2024-11-07T12:29:21.959Z] =================================================================================================================== 00:24:13.952 [2024-11-07T12:29:21.959Z] Total : 4820.75 18.83 0.00 0.00 26516.37 6389.76 80827.73 00:24:13.952 { 00:24:13.952 "results": [ 00:24:13.952 { 00:24:13.952 "job": "TLSTESTn1", 00:24:13.952 "core_mask": "0x4", 00:24:13.952 "workload": "verify", 00:24:13.952 "status": "finished", 00:24:13.952 "verify_range": { 00:24:13.952 "start": 0, 00:24:13.952 "length": 8192 00:24:13.952 }, 00:24:13.952 "queue_depth": 128, 00:24:13.952 "io_size": 4096, 00:24:13.952 "runtime": 10.015874, 00:24:13.952 "iops": 4820.747545346517, 00:24:13.952 "mibps": 18.83104509900983, 00:24:13.952 "io_failed": 0, 00:24:13.952 "io_timeout": 0, 00:24:13.952 "avg_latency_us": 26516.370143318698, 00:24:13.952 "min_latency_us": 6389.76, 00:24:13.952 "max_latency_us": 80827.73333333334 00:24:13.952 } 00:24:13.952 ], 00:24:13.952 "core_count": 1 00:24:13.952 } 00:24:14.212 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.212 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3899192 00:24:14.212 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3899192 ']' 00:24:14.212 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3899192 00:24:14.212 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:24:14.212 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:14.212 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3899192 00:24:14.212 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:14.212 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:14.212 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3899192' 00:24:14.212 killing process with pid 3899192 00:24:14.212 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3899192 00:24:14.212 Received shutdown signal, test time was about 10.000000 seconds 00:24:14.212 00:24:14.212 Latency(us) 00:24:14.212 [2024-11-07T12:29:22.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.212 [2024-11-07T12:29:22.219Z] =================================================================================================================== 00:24:14.212 [2024-11-07T12:29:22.219Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.212 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3899192 00:24:14.782 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3899073 00:24:14.782 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3899073 ']' 00:24:14.782 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3899073 00:24:14.782 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:14.782 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:14.782 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3899073 00:24:14.782 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:14.782 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:14.782 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3899073' 00:24:14.782 killing process with pid 3899073 00:24:14.782 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3899073 00:24:14.782 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3899073 00:24:15.352 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:15.352 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:15.352 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.352 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.352 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3901504 00:24:15.352 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3901504 00:24:15.352 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
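Editor's note: teardown between runs goes through the killprocess helper whose xtrace lines repeat throughout this log. A rough reconstruction of its core logic from those lines only (the real helper in common/autotest_common.sh also handles FreeBSD and sudo-wrapped processes, which this sketch glosses over):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1              # bail out if the pid is already gone
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then    # sudo wrappers take a different path
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                             # reap the child and surface its exit status
    }
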
00:24:15.352 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3901504 ']' 00:24:15.353 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.353 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:15.353 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.353 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:15.353 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.353 [2024-11-07 13:29:23.286283] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:24:15.353 [2024-11-07 13:29:23.286404] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.613 [2024-11-07 13:29:23.442355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.613 [2024-11-07 13:29:23.538431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.613 [2024-11-07 13:29:23.538476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.613 [2024-11-07 13:29:23.538488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.613 [2024-11-07 13:29:23.538499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.613 [2024-11-07 13:29:23.538510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
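Editor's note: this fresh target (pid 3901504) starts with no -c config; the RPCs traced just below (target/tls.sh@52 through @59) build the TLS-enabled subsystem step by step. A condensed sketch of that server-side sequence with every value copied from the trace; the -k flag on the listener is what requests the TLS-secured channel:

    rpc=scripts/rpc.py      # the test invokes this via its full workspace path
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                   # -k: listen with TLS
    $rpc bdev_malloc_create 32 4096 -b malloc0          # 32 MB ram disk, 4096 B blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs  # target side of the same PSK
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
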
00:24:15.613 [2024-11-07 13:29:23.539700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.183 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:16.183 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:16.183 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:16.183 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:16.183 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.183 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.183 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.2MGxW7SPjs 00:24:16.183 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2MGxW7SPjs 00:24:16.183 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:16.443 [2024-11-07 13:29:24.230149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.443 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:16.443 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:16.702 [2024-11-07 13:29:24.550964] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.702 [2024-11-07 13:29:24.551236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.702 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:16.962 malloc0 00:24:16.962 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:16.962 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs 00:24:17.222 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:17.482 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:17.482 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3901884 00:24:17.482 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.482 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3901884 /var/tmp/bdevperf.sock 00:24:17.482 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3901884 ']' 00:24:17.482 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.482 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:17.482 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.482 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:17.482 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.482 [2024-11-07 13:29:25.332569] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:24:17.482 [2024-11-07 13:29:25.332679] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901884 ] 00:24:17.482 [2024-11-07 13:29:25.478893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.742 [2024-11-07 13:29:25.553382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.311 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:18.311 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:18.311 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs 00:24:18.311 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:18.572 [2024-11-07 13:29:26.439502] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.572 nvme0n1 00:24:18.572 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:18.831 Running I/O for 1 seconds... 
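Editor's note: because bdevperf runs with -z, no I/O flows until the companion script issues perform_tests over the RPC socket, as traced above; the MiB/s column in the result tables is then just IOPS scaled by the 4096-byte I/O size. A short sketch, with the arithmetic checked against the 10-second TLS run earlier in this log:

    # kick the idle bdevperf instance; -t bounds how long we wait for the RPC
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock -t 20 perform_tests
    # throughput column: MiB/s = IOPS * io_size / 2^20
    # e.g. 4820.75 IOPS * 4096 B = 19745792 B/s; 19745792 / 1048576 = 18.83 MiB/s
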
00:24:19.772 3630.00 IOPS, 14.18 MiB/s 00:24:19.772 Latency(us) 00:24:19.772 [2024-11-07T12:29:27.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.772 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:19.772 Verification LBA range: start 0x0 length 0x2000 00:24:19.772 nvme0n1 : 1.02 3682.20 14.38 0.00 0.00 34420.77 5133.65 105731.41 00:24:19.772 [2024-11-07T12:29:27.779Z] =================================================================================================================== 00:24:19.772 [2024-11-07T12:29:27.779Z] Total : 3682.20 14.38 0.00 0.00 34420.77 5133.65 105731.41 00:24:19.772 { 00:24:19.772 "results": [ 00:24:19.772 { 00:24:19.772 "job": "nvme0n1", 00:24:19.772 "core_mask": "0x2", 00:24:19.772 "workload": "verify", 00:24:19.772 "status": "finished", 00:24:19.772 "verify_range": { 00:24:19.772 "start": 0, 00:24:19.772 "length": 8192 00:24:19.772 }, 00:24:19.772 "queue_depth": 128, 00:24:19.772 "io_size": 4096, 00:24:19.772 "runtime": 1.020586, 00:24:19.772 "iops": 3682.1982664861166, 00:24:19.772 "mibps": 14.383586978461393, 00:24:19.772 "io_failed": 0, 00:24:19.772 "io_timeout": 0, 00:24:19.772 "avg_latency_us": 34420.76906155756, 00:24:19.772 "min_latency_us": 5133.653333333334, 00:24:19.772 "max_latency_us": 105731.41333333333 00:24:19.772 } 00:24:19.772 ], 00:24:19.772 "core_count": 1 00:24:19.772 } 00:24:19.772 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3901884 00:24:19.772 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3901884 ']' 00:24:19.772 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3901884 00:24:19.772 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:19.772 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:19.772 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3901884 00:24:19.772 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:19.772 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:19.772 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3901884' 00:24:19.772 killing process with pid 3901884 00:24:19.772 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3901884 00:24:19.772 Received shutdown signal, test time was about 1.000000 seconds 00:24:19.772 00:24:19.772 Latency(us) 00:24:19.772 [2024-11-07T12:29:27.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.772 [2024-11-07T12:29:27.779Z] =================================================================================================================== 00:24:19.772 [2024-11-07T12:29:27.779Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:19.772 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3901884 00:24:20.342 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3901504 00:24:20.342 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3901504 ']' 00:24:20.342 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3901504 00:24:20.342 13:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:20.342 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:20.342 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3901504 00:24:20.342 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:20.342 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:20.342 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3901504' 00:24:20.342 killing process with pid 3901504 00:24:20.342 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3901504 00:24:20.342 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3901504 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3902698 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3902698 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3902698 ']' 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:21.282 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.282 [2024-11-07 13:29:29.180844] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:24:21.282 [2024-11-07 13:29:29.180966] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.543 [2024-11-07 13:29:29.337516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.543 [2024-11-07 13:29:29.433894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.543 [2024-11-07 13:29:29.433941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:21.543 [2024-11-07 13:29:29.433953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.543 [2024-11-07 13:29:29.433964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.543 [2024-11-07 13:29:29.433975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.543 [2024-11-07 13:29:29.435213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.112 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:22.112 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:22.112 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:22.112 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:22.112 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.112 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.112 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:22.112 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.112 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.112 [2024-11-07 13:29:29.985190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.112 malloc0 00:24:22.112 [2024-11-07 13:29:30.032406] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.112 [2024-11-07 13:29:30.032689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.112 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.112 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3902904 00:24:22.112 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3902904 /var/tmp/bdevperf.sock 00:24:22.112 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:22.112 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3902904 ']' 00:24:22.112 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.112 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:22.112 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.112 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:22.112 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.371 [2024-11-07 13:29:30.140825] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
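(Editor's sketch.) The *** TCP Transport Init ***, malloc0 and TLS listener notices above are produced by the target-side RPCs of target/tls.sh. A hedged reconstruction from the config the target saves later in this log; treat the exact flags as illustrative equivalents rather than the script's literal commands:

    ./scripts/rpc.py nvmf_create_transport -t TCP
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096    # 32 MiB / 4096 B blocks, per the saved config
    ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The saved config additionally records "sock_impl": "ssl" on that listener, which is what routes it through the TLS-capable socket implementation.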
00:24:22.371 [2024-11-07 13:29:30.140942] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902904 ] 00:24:22.371 [2024-11-07 13:29:30.282034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.371 [2024-11-07 13:29:30.355496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.941 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:22.941 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:22.941 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs 00:24:23.201 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:23.461 [2024-11-07 13:29:31.206233] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:23.461 nvme0n1 00:24:23.461 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:23.461 Running I/O for 1 seconds... 00:24:24.842 4405.00 IOPS, 17.21 MiB/s 00:24:24.842 Latency(us) 00:24:24.842 [2024-11-07T12:29:32.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.842 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:24.842 Verification LBA range: start 0x0 length 0x2000 00:24:24.842 nvme0n1 : 1.06 4296.54 16.78 0.00 0.00 29125.94 5160.96 48278.19 00:24:24.842 [2024-11-07T12:29:32.849Z] =================================================================================================================== 00:24:24.842 [2024-11-07T12:29:32.849Z] Total : 4296.54 16.78 0.00 0.00 29125.94 5160.96 48278.19 00:24:24.842 { 00:24:24.842 "results": [ 00:24:24.842 { 00:24:24.842 "job": "nvme0n1", 00:24:24.842 "core_mask": "0x2", 00:24:24.842 "workload": "verify", 00:24:24.842 "status": "finished", 00:24:24.842 "verify_range": { 00:24:24.842 "start": 0, 00:24:24.842 "length": 8192 00:24:24.842 }, 00:24:24.842 "queue_depth": 128, 00:24:24.842 "io_size": 4096, 00:24:24.842 "runtime": 1.055036, 00:24:24.842 "iops": 4296.535852805023, 00:24:24.842 "mibps": 16.78334317501962, 00:24:24.842 "io_failed": 0, 00:24:24.842 "io_timeout": 0, 00:24:24.842 "avg_latency_us": 29125.942460475035, 00:24:24.842 "min_latency_us": 5160.96, 00:24:24.842 "max_latency_us": 48278.18666666667 00:24:24.842 } 00:24:24.842 ], 00:24:24.842 "core_count": 1 00:24:24.842 } 00:24:24.842 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:24.842 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.842 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.842 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.842 13:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:24.842 "subsystems": [ 00:24:24.842 { 00:24:24.842 "subsystem": "keyring", 00:24:24.842 "config": [ 00:24:24.842 { 00:24:24.842 "method": "keyring_file_add_key", 00:24:24.842 "params": { 00:24:24.842 "name": "key0", 00:24:24.842 "path": "/tmp/tmp.2MGxW7SPjs" 00:24:24.842 } 00:24:24.842 } 00:24:24.842 ] 00:24:24.842 }, 00:24:24.842 { 00:24:24.842 "subsystem": "iobuf", 00:24:24.842 "config": [ 00:24:24.842 { 00:24:24.842 "method": "iobuf_set_options", 00:24:24.842 "params": { 00:24:24.842 "small_pool_count": 8192, 00:24:24.842 "large_pool_count": 1024, 00:24:24.842 "small_bufsize": 8192, 00:24:24.842 "large_bufsize": 135168, 00:24:24.842 "enable_numa": false 00:24:24.842 } 00:24:24.842 } 00:24:24.842 ] 00:24:24.842 }, 00:24:24.842 { 00:24:24.842 "subsystem": "sock", 00:24:24.842 "config": [ 00:24:24.842 { 00:24:24.842 "method": "sock_set_default_impl", 00:24:24.842 "params": { 00:24:24.842 "impl_name": "posix" 00:24:24.842 } 00:24:24.842 }, 00:24:24.842 { 00:24:24.842 "method": "sock_impl_set_options", 00:24:24.842 "params": { 00:24:24.842 "impl_name": "ssl", 00:24:24.842 "recv_buf_size": 4096, 00:24:24.842 "send_buf_size": 4096, 00:24:24.842 "enable_recv_pipe": true, 00:24:24.842 "enable_quickack": false, 00:24:24.842 "enable_placement_id": 0, 00:24:24.842 "enable_zerocopy_send_server": true, 00:24:24.842 "enable_zerocopy_send_client": false, 00:24:24.842 "zerocopy_threshold": 0, 00:24:24.842 "tls_version": 0, 00:24:24.842 "enable_ktls": false 00:24:24.842 } 00:24:24.842 }, 00:24:24.842 { 00:24:24.842 "method": "sock_impl_set_options", 00:24:24.842 "params": { 00:24:24.842 "impl_name": "posix", 00:24:24.842 "recv_buf_size": 2097152, 00:24:24.842 "send_buf_size": 2097152, 00:24:24.842 "enable_recv_pipe": true, 00:24:24.842 "enable_quickack": false, 00:24:24.842 "enable_placement_id": 0, 00:24:24.842 "enable_zerocopy_send_server": true, 00:24:24.842 "enable_zerocopy_send_client": false, 00:24:24.842 "zerocopy_threshold": 0, 00:24:24.842 "tls_version": 0, 00:24:24.842 "enable_ktls": false 00:24:24.842 } 00:24:24.842 } 00:24:24.842 ] 00:24:24.842 }, 00:24:24.842 { 00:24:24.842 "subsystem": "vmd", 00:24:24.842 "config": [] 00:24:24.842 }, 00:24:24.842 { 00:24:24.842 "subsystem": "accel", 00:24:24.842 "config": [ 00:24:24.842 { 00:24:24.842 "method": "accel_set_options", 00:24:24.842 "params": { 00:24:24.842 "small_cache_size": 128, 00:24:24.842 "large_cache_size": 16, 00:24:24.842 "task_count": 2048, 00:24:24.842 "sequence_count": 2048, 00:24:24.842 "buf_count": 2048 00:24:24.842 } 00:24:24.842 } 00:24:24.842 ] 00:24:24.842 }, 00:24:24.842 { 00:24:24.842 "subsystem": "bdev", 00:24:24.842 "config": [ 00:24:24.842 { 00:24:24.842 "method": "bdev_set_options", 00:24:24.842 "params": { 00:24:24.842 "bdev_io_pool_size": 65535, 00:24:24.842 "bdev_io_cache_size": 256, 00:24:24.842 "bdev_auto_examine": true, 00:24:24.842 "iobuf_small_cache_size": 128, 00:24:24.842 "iobuf_large_cache_size": 16 00:24:24.842 } 00:24:24.842 }, 00:24:24.842 { 00:24:24.842 "method": "bdev_raid_set_options", 00:24:24.842 "params": { 00:24:24.842 "process_window_size_kb": 1024, 00:24:24.842 "process_max_bandwidth_mb_sec": 0 00:24:24.842 } 00:24:24.842 }, 00:24:24.842 { 00:24:24.842 "method": "bdev_iscsi_set_options", 00:24:24.842 "params": { 00:24:24.842 "timeout_sec": 30 00:24:24.842 } 00:24:24.842 }, 00:24:24.843 { 00:24:24.843 "method": "bdev_nvme_set_options", 00:24:24.843 "params": { 00:24:24.843 "action_on_timeout": "none", 00:24:24.843 
"timeout_us": 0, 00:24:24.843 "timeout_admin_us": 0, 00:24:24.843 "keep_alive_timeout_ms": 10000, 00:24:24.843 "arbitration_burst": 0, 00:24:24.843 "low_priority_weight": 0, 00:24:24.843 "medium_priority_weight": 0, 00:24:24.843 "high_priority_weight": 0, 00:24:24.843 "nvme_adminq_poll_period_us": 10000, 00:24:24.843 "nvme_ioq_poll_period_us": 0, 00:24:24.843 "io_queue_requests": 0, 00:24:24.843 "delay_cmd_submit": true, 00:24:24.843 "transport_retry_count": 4, 00:24:24.843 "bdev_retry_count": 3, 00:24:24.843 "transport_ack_timeout": 0, 00:24:24.843 "ctrlr_loss_timeout_sec": 0, 00:24:24.843 "reconnect_delay_sec": 0, 00:24:24.843 "fast_io_fail_timeout_sec": 0, 00:24:24.843 "disable_auto_failback": false, 00:24:24.843 "generate_uuids": false, 00:24:24.843 "transport_tos": 0, 00:24:24.843 "nvme_error_stat": false, 00:24:24.843 "rdma_srq_size": 0, 00:24:24.843 "io_path_stat": false, 00:24:24.843 "allow_accel_sequence": false, 00:24:24.843 "rdma_max_cq_size": 0, 00:24:24.843 "rdma_cm_event_timeout_ms": 0, 00:24:24.843 "dhchap_digests": [ 00:24:24.843 "sha256", 00:24:24.843 "sha384", 00:24:24.843 "sha512" 00:24:24.843 ], 00:24:24.843 "dhchap_dhgroups": [ 00:24:24.843 "null", 00:24:24.843 "ffdhe2048", 00:24:24.843 "ffdhe3072", 00:24:24.843 "ffdhe4096", 00:24:24.843 "ffdhe6144", 00:24:24.843 "ffdhe8192" 00:24:24.843 ] 00:24:24.843 } 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "method": "bdev_nvme_set_hotplug", 00:24:24.843 "params": { 00:24:24.843 "period_us": 100000, 00:24:24.843 "enable": false 00:24:24.843 } 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "method": "bdev_malloc_create", 00:24:24.843 "params": { 00:24:24.843 "name": "malloc0", 00:24:24.843 "num_blocks": 8192, 00:24:24.843 "block_size": 4096, 00:24:24.843 "physical_block_size": 4096, 00:24:24.843 "uuid": "6a94342e-a67b-43ff-a519-866118a7ad96", 00:24:24.843 "optimal_io_boundary": 0, 00:24:24.843 "md_size": 0, 00:24:24.843 "dif_type": 0, 00:24:24.843 "dif_is_head_of_md": false, 00:24:24.843 "dif_pi_format": 0 00:24:24.843 } 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "method": "bdev_wait_for_examine" 00:24:24.843 } 00:24:24.843 ] 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "subsystem": "nbd", 00:24:24.843 "config": [] 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "subsystem": "scheduler", 00:24:24.843 "config": [ 00:24:24.843 { 00:24:24.843 "method": "framework_set_scheduler", 00:24:24.843 "params": { 00:24:24.843 "name": "static" 00:24:24.843 } 00:24:24.843 } 00:24:24.843 ] 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "subsystem": "nvmf", 00:24:24.843 "config": [ 00:24:24.843 { 00:24:24.843 "method": "nvmf_set_config", 00:24:24.843 "params": { 00:24:24.843 "discovery_filter": "match_any", 00:24:24.843 "admin_cmd_passthru": { 00:24:24.843 "identify_ctrlr": false 00:24:24.843 }, 00:24:24.843 "dhchap_digests": [ 00:24:24.843 "sha256", 00:24:24.843 "sha384", 00:24:24.843 "sha512" 00:24:24.843 ], 00:24:24.843 "dhchap_dhgroups": [ 00:24:24.843 "null", 00:24:24.843 "ffdhe2048", 00:24:24.843 "ffdhe3072", 00:24:24.843 "ffdhe4096", 00:24:24.843 "ffdhe6144", 00:24:24.843 "ffdhe8192" 00:24:24.843 ] 00:24:24.843 } 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "method": "nvmf_set_max_subsystems", 00:24:24.843 "params": { 00:24:24.843 "max_subsystems": 1024 00:24:24.843 } 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "method": "nvmf_set_crdt", 00:24:24.843 "params": { 00:24:24.843 "crdt1": 0, 00:24:24.843 "crdt2": 0, 00:24:24.843 "crdt3": 0 00:24:24.843 } 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "method": "nvmf_create_transport", 00:24:24.843 "params": 
{ 00:24:24.843 "trtype": "TCP", 00:24:24.843 "max_queue_depth": 128, 00:24:24.843 "max_io_qpairs_per_ctrlr": 127, 00:24:24.843 "in_capsule_data_size": 4096, 00:24:24.843 "max_io_size": 131072, 00:24:24.843 "io_unit_size": 131072, 00:24:24.843 "max_aq_depth": 128, 00:24:24.843 "num_shared_buffers": 511, 00:24:24.843 "buf_cache_size": 4294967295, 00:24:24.843 "dif_insert_or_strip": false, 00:24:24.843 "zcopy": false, 00:24:24.843 "c2h_success": false, 00:24:24.843 "sock_priority": 0, 00:24:24.843 "abort_timeout_sec": 1, 00:24:24.843 "ack_timeout": 0, 00:24:24.843 "data_wr_pool_size": 0 00:24:24.843 } 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "method": "nvmf_create_subsystem", 00:24:24.843 "params": { 00:24:24.843 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.843 "allow_any_host": false, 00:24:24.843 "serial_number": "00000000000000000000", 00:24:24.843 "model_number": "SPDK bdev Controller", 00:24:24.843 "max_namespaces": 32, 00:24:24.843 "min_cntlid": 1, 00:24:24.843 "max_cntlid": 65519, 00:24:24.843 "ana_reporting": false 00:24:24.843 } 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "method": "nvmf_subsystem_add_host", 00:24:24.843 "params": { 00:24:24.843 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.843 "host": "nqn.2016-06.io.spdk:host1", 00:24:24.843 "psk": "key0" 00:24:24.843 } 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "method": "nvmf_subsystem_add_ns", 00:24:24.843 "params": { 00:24:24.843 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.843 "namespace": { 00:24:24.843 "nsid": 1, 00:24:24.843 "bdev_name": "malloc0", 00:24:24.843 "nguid": "6A94342EA67B43FFA519866118A7AD96", 00:24:24.843 "uuid": "6a94342e-a67b-43ff-a519-866118a7ad96", 00:24:24.843 "no_auto_visible": false 00:24:24.843 } 00:24:24.843 } 00:24:24.843 }, 00:24:24.843 { 00:24:24.843 "method": "nvmf_subsystem_add_listener", 00:24:24.843 "params": { 00:24:24.843 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.843 "listen_address": { 00:24:24.843 "trtype": "TCP", 00:24:24.843 "adrfam": "IPv4", 00:24:24.843 "traddr": "10.0.0.2", 00:24:24.843 "trsvcid": "4420" 00:24:24.843 }, 00:24:24.843 "secure_channel": false, 00:24:24.843 "sock_impl": "ssl" 00:24:24.843 } 00:24:24.843 } 00:24:24.843 ] 00:24:24.843 } 00:24:24.843 ] 00:24:24.843 }' 00:24:24.843 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:25.103 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:25.103 "subsystems": [ 00:24:25.103 { 00:24:25.103 "subsystem": "keyring", 00:24:25.103 "config": [ 00:24:25.103 { 00:24:25.103 "method": "keyring_file_add_key", 00:24:25.103 "params": { 00:24:25.103 "name": "key0", 00:24:25.103 "path": "/tmp/tmp.2MGxW7SPjs" 00:24:25.103 } 00:24:25.103 } 00:24:25.103 ] 00:24:25.103 }, 00:24:25.103 { 00:24:25.103 "subsystem": "iobuf", 00:24:25.103 "config": [ 00:24:25.103 { 00:24:25.103 "method": "iobuf_set_options", 00:24:25.103 "params": { 00:24:25.103 "small_pool_count": 8192, 00:24:25.103 "large_pool_count": 1024, 00:24:25.103 "small_bufsize": 8192, 00:24:25.103 "large_bufsize": 135168, 00:24:25.103 "enable_numa": false 00:24:25.103 } 00:24:25.103 } 00:24:25.103 ] 00:24:25.103 }, 00:24:25.103 { 00:24:25.103 "subsystem": "sock", 00:24:25.103 "config": [ 00:24:25.103 { 00:24:25.103 "method": "sock_set_default_impl", 00:24:25.103 "params": { 00:24:25.103 "impl_name": "posix" 00:24:25.103 } 00:24:25.103 }, 00:24:25.103 { 00:24:25.103 "method": "sock_impl_set_options", 00:24:25.103 
"params": { 00:24:25.103 "impl_name": "ssl", 00:24:25.103 "recv_buf_size": 4096, 00:24:25.103 "send_buf_size": 4096, 00:24:25.103 "enable_recv_pipe": true, 00:24:25.104 "enable_quickack": false, 00:24:25.104 "enable_placement_id": 0, 00:24:25.104 "enable_zerocopy_send_server": true, 00:24:25.104 "enable_zerocopy_send_client": false, 00:24:25.104 "zerocopy_threshold": 0, 00:24:25.104 "tls_version": 0, 00:24:25.104 "enable_ktls": false 00:24:25.104 } 00:24:25.104 }, 00:24:25.104 { 00:24:25.104 "method": "sock_impl_set_options", 00:24:25.104 "params": { 00:24:25.104 "impl_name": "posix", 00:24:25.104 "recv_buf_size": 2097152, 00:24:25.104 "send_buf_size": 2097152, 00:24:25.104 "enable_recv_pipe": true, 00:24:25.104 "enable_quickack": false, 00:24:25.104 "enable_placement_id": 0, 00:24:25.104 "enable_zerocopy_send_server": true, 00:24:25.104 "enable_zerocopy_send_client": false, 00:24:25.104 "zerocopy_threshold": 0, 00:24:25.104 "tls_version": 0, 00:24:25.104 "enable_ktls": false 00:24:25.104 } 00:24:25.104 } 00:24:25.104 ] 00:24:25.104 }, 00:24:25.104 { 00:24:25.104 "subsystem": "vmd", 00:24:25.104 "config": [] 00:24:25.104 }, 00:24:25.104 { 00:24:25.104 "subsystem": "accel", 00:24:25.104 "config": [ 00:24:25.104 { 00:24:25.104 "method": "accel_set_options", 00:24:25.104 "params": { 00:24:25.104 "small_cache_size": 128, 00:24:25.104 "large_cache_size": 16, 00:24:25.104 "task_count": 2048, 00:24:25.104 "sequence_count": 2048, 00:24:25.104 "buf_count": 2048 00:24:25.104 } 00:24:25.104 } 00:24:25.104 ] 00:24:25.104 }, 00:24:25.104 { 00:24:25.104 "subsystem": "bdev", 00:24:25.104 "config": [ 00:24:25.104 { 00:24:25.104 "method": "bdev_set_options", 00:24:25.104 "params": { 00:24:25.104 "bdev_io_pool_size": 65535, 00:24:25.104 "bdev_io_cache_size": 256, 00:24:25.104 "bdev_auto_examine": true, 00:24:25.104 "iobuf_small_cache_size": 128, 00:24:25.104 "iobuf_large_cache_size": 16 00:24:25.104 } 00:24:25.104 }, 00:24:25.104 { 00:24:25.104 "method": "bdev_raid_set_options", 00:24:25.104 "params": { 00:24:25.104 "process_window_size_kb": 1024, 00:24:25.104 "process_max_bandwidth_mb_sec": 0 00:24:25.104 } 00:24:25.104 }, 00:24:25.104 { 00:24:25.104 "method": "bdev_iscsi_set_options", 00:24:25.104 "params": { 00:24:25.104 "timeout_sec": 30 00:24:25.104 } 00:24:25.104 }, 00:24:25.104 { 00:24:25.104 "method": "bdev_nvme_set_options", 00:24:25.104 "params": { 00:24:25.104 "action_on_timeout": "none", 00:24:25.104 "timeout_us": 0, 00:24:25.104 "timeout_admin_us": 0, 00:24:25.104 "keep_alive_timeout_ms": 10000, 00:24:25.104 "arbitration_burst": 0, 00:24:25.104 "low_priority_weight": 0, 00:24:25.104 "medium_priority_weight": 0, 00:24:25.104 "high_priority_weight": 0, 00:24:25.104 "nvme_adminq_poll_period_us": 10000, 00:24:25.104 "nvme_ioq_poll_period_us": 0, 00:24:25.104 "io_queue_requests": 512, 00:24:25.104 "delay_cmd_submit": true, 00:24:25.104 "transport_retry_count": 4, 00:24:25.104 "bdev_retry_count": 3, 00:24:25.104 "transport_ack_timeout": 0, 00:24:25.104 "ctrlr_loss_timeout_sec": 0, 00:24:25.104 "reconnect_delay_sec": 0, 00:24:25.104 "fast_io_fail_timeout_sec": 0, 00:24:25.104 "disable_auto_failback": false, 00:24:25.104 "generate_uuids": false, 00:24:25.104 "transport_tos": 0, 00:24:25.104 "nvme_error_stat": false, 00:24:25.104 "rdma_srq_size": 0, 00:24:25.104 "io_path_stat": false, 00:24:25.104 "allow_accel_sequence": false, 00:24:25.104 "rdma_max_cq_size": 0, 00:24:25.104 "rdma_cm_event_timeout_ms": 0, 00:24:25.104 "dhchap_digests": [ 00:24:25.104 "sha256", 00:24:25.104 "sha384", 00:24:25.104 
"sha512" 00:24:25.104 ], 00:24:25.104 "dhchap_dhgroups": [ 00:24:25.104 "null", 00:24:25.104 "ffdhe2048", 00:24:25.104 "ffdhe3072", 00:24:25.104 "ffdhe4096", 00:24:25.104 "ffdhe6144", 00:24:25.104 "ffdhe8192" 00:24:25.104 ] 00:24:25.104 } 00:24:25.104 }, 00:24:25.104 { 00:24:25.104 "method": "bdev_nvme_attach_controller", 00:24:25.104 "params": { 00:24:25.104 "name": "nvme0", 00:24:25.104 "trtype": "TCP", 00:24:25.104 "adrfam": "IPv4", 00:24:25.104 "traddr": "10.0.0.2", 00:24:25.104 "trsvcid": "4420", 00:24:25.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.104 "prchk_reftag": false, 00:24:25.104 "prchk_guard": false, 00:24:25.104 "ctrlr_loss_timeout_sec": 0, 00:24:25.104 "reconnect_delay_sec": 0, 00:24:25.104 "fast_io_fail_timeout_sec": 0, 00:24:25.104 "psk": "key0", 00:24:25.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.104 "hdgst": false, 00:24:25.104 "ddgst": false, 00:24:25.104 "multipath": "multipath" 00:24:25.104 } 00:24:25.104 }, 00:24:25.104 { 00:24:25.104 "method": "bdev_nvme_set_hotplug", 00:24:25.104 "params": { 00:24:25.104 "period_us": 100000, 00:24:25.104 "enable": false 00:24:25.104 } 00:24:25.104 }, 00:24:25.104 { 00:24:25.104 "method": "bdev_enable_histogram", 00:24:25.104 "params": { 00:24:25.104 "name": "nvme0n1", 00:24:25.104 "enable": true 00:24:25.104 } 00:24:25.104 }, 00:24:25.104 { 00:24:25.104 "method": "bdev_wait_for_examine" 00:24:25.104 } 00:24:25.104 ] 00:24:25.104 }, 00:24:25.104 { 00:24:25.104 "subsystem": "nbd", 00:24:25.104 "config": [] 00:24:25.104 } 00:24:25.104 ] 00:24:25.104 }' 00:24:25.104 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3902904 00:24:25.104 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3902904 ']' 00:24:25.104 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3902904 00:24:25.104 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:25.104 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:25.104 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3902904 00:24:25.104 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:25.104 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:25.104 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3902904' 00:24:25.104 killing process with pid 3902904 00:24:25.104 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3902904 00:24:25.104 Received shutdown signal, test time was about 1.000000 seconds 00:24:25.104 00:24:25.104 Latency(us) 00:24:25.104 [2024-11-07T12:29:33.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.104 [2024-11-07T12:29:33.111Z] =================================================================================================================== 00:24:25.104 [2024-11-07T12:29:33.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.104 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3902904 00:24:25.364 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3902698 00:24:25.364 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3902698 
']' 00:24:25.364 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3902698 00:24:25.364 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:25.654 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:25.654 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3902698 00:24:25.654 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:25.654 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:25.654 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3902698' 00:24:25.654 killing process with pid 3902698 00:24:25.654 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3902698 00:24:25.654 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3902698 00:24:26.289 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:26.289 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.289 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:26.289 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:26.289 "subsystems": [ 00:24:26.289 { 00:24:26.289 "subsystem": "keyring", 00:24:26.289 "config": [ 00:24:26.289 { 00:24:26.289 "method": "keyring_file_add_key", 00:24:26.289 "params": { 00:24:26.289 "name": "key0", 00:24:26.289 "path": "/tmp/tmp.2MGxW7SPjs" 00:24:26.289 } 00:24:26.289 } 00:24:26.289 ] 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "subsystem": "iobuf", 00:24:26.289 "config": [ 00:24:26.289 { 00:24:26.289 "method": "iobuf_set_options", 00:24:26.289 "params": { 00:24:26.289 "small_pool_count": 8192, 00:24:26.289 "large_pool_count": 1024, 00:24:26.289 "small_bufsize": 8192, 00:24:26.289 "large_bufsize": 135168, 00:24:26.289 "enable_numa": false 00:24:26.289 } 00:24:26.289 } 00:24:26.289 ] 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "subsystem": "sock", 00:24:26.289 "config": [ 00:24:26.289 { 00:24:26.289 "method": "sock_set_default_impl", 00:24:26.289 "params": { 00:24:26.289 "impl_name": "posix" 00:24:26.289 } 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "method": "sock_impl_set_options", 00:24:26.289 "params": { 00:24:26.289 "impl_name": "ssl", 00:24:26.289 "recv_buf_size": 4096, 00:24:26.289 "send_buf_size": 4096, 00:24:26.289 "enable_recv_pipe": true, 00:24:26.289 "enable_quickack": false, 00:24:26.289 "enable_placement_id": 0, 00:24:26.289 "enable_zerocopy_send_server": true, 00:24:26.289 "enable_zerocopy_send_client": false, 00:24:26.289 "zerocopy_threshold": 0, 00:24:26.289 "tls_version": 0, 00:24:26.289 "enable_ktls": false 00:24:26.289 } 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "method": "sock_impl_set_options", 00:24:26.289 "params": { 00:24:26.289 "impl_name": "posix", 00:24:26.289 "recv_buf_size": 2097152, 00:24:26.289 "send_buf_size": 2097152, 00:24:26.289 "enable_recv_pipe": true, 00:24:26.289 "enable_quickack": false, 00:24:26.289 "enable_placement_id": 0, 00:24:26.289 "enable_zerocopy_send_server": true, 00:24:26.289 "enable_zerocopy_send_client": false, 00:24:26.289 "zerocopy_threshold": 0, 00:24:26.289 "tls_version": 0, 00:24:26.289 "enable_ktls": 
false 00:24:26.289 } 00:24:26.289 } 00:24:26.289 ] 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "subsystem": "vmd", 00:24:26.289 "config": [] 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "subsystem": "accel", 00:24:26.289 "config": [ 00:24:26.289 { 00:24:26.289 "method": "accel_set_options", 00:24:26.289 "params": { 00:24:26.289 "small_cache_size": 128, 00:24:26.289 "large_cache_size": 16, 00:24:26.289 "task_count": 2048, 00:24:26.289 "sequence_count": 2048, 00:24:26.289 "buf_count": 2048 00:24:26.289 } 00:24:26.289 } 00:24:26.289 ] 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "subsystem": "bdev", 00:24:26.289 "config": [ 00:24:26.289 { 00:24:26.289 "method": "bdev_set_options", 00:24:26.289 "params": { 00:24:26.289 "bdev_io_pool_size": 65535, 00:24:26.289 "bdev_io_cache_size": 256, 00:24:26.289 "bdev_auto_examine": true, 00:24:26.289 "iobuf_small_cache_size": 128, 00:24:26.289 "iobuf_large_cache_size": 16 00:24:26.289 } 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "method": "bdev_raid_set_options", 00:24:26.289 "params": { 00:24:26.289 "process_window_size_kb": 1024, 00:24:26.289 "process_max_bandwidth_mb_sec": 0 00:24:26.289 } 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "method": "bdev_iscsi_set_options", 00:24:26.289 "params": { 00:24:26.289 "timeout_sec": 30 00:24:26.289 } 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "method": "bdev_nvme_set_options", 00:24:26.289 "params": { 00:24:26.289 "action_on_timeout": "none", 00:24:26.289 "timeout_us": 0, 00:24:26.289 "timeout_admin_us": 0, 00:24:26.289 "keep_alive_timeout_ms": 10000, 00:24:26.289 "arbitration_burst": 0, 00:24:26.289 "low_priority_weight": 0, 00:24:26.289 "medium_priority_weight": 0, 00:24:26.289 "high_priority_weight": 0, 00:24:26.289 "nvme_adminq_poll_period_us": 10000, 00:24:26.289 "nvme_ioq_poll_period_us": 0, 00:24:26.289 "io_queue_requests": 0, 00:24:26.289 "delay_cmd_submit": true, 00:24:26.289 "transport_retry_count": 4, 00:24:26.289 "bdev_retry_count": 3, 00:24:26.289 "transport_ack_timeout": 0, 00:24:26.289 "ctrlr_loss_timeout_sec": 0, 00:24:26.289 "reconnect_delay_sec": 0, 00:24:26.289 "fast_io_fail_timeout_sec": 0, 00:24:26.289 "disable_auto_failback": false, 00:24:26.289 "generate_uuids": false, 00:24:26.289 "transport_tos": 0, 00:24:26.289 "nvme_error_stat": false, 00:24:26.289 "rdma_srq_size": 0, 00:24:26.289 "io_path_stat": false, 00:24:26.289 "allow_accel_sequence": false, 00:24:26.289 "rdma_max_cq_size": 0, 00:24:26.289 "rdma_cm_event_timeout_ms": 0, 00:24:26.289 "dhchap_digests": [ 00:24:26.289 "sha256", 00:24:26.289 "sha384", 00:24:26.289 "sha512" 00:24:26.289 ], 00:24:26.289 "dhchap_dhgroups": [ 00:24:26.289 "null", 00:24:26.289 "ffdhe2048", 00:24:26.289 "ffdhe3072", 00:24:26.289 "ffdhe4096", 00:24:26.289 "ffdhe6144", 00:24:26.289 "ffdhe8192" 00:24:26.289 ] 00:24:26.289 } 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "method": "bdev_nvme_set_hotplug", 00:24:26.289 "params": { 00:24:26.289 "period_us": 100000, 00:24:26.289 "enable": false 00:24:26.289 } 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "method": "bdev_malloc_create", 00:24:26.289 "params": { 00:24:26.289 "name": "malloc0", 00:24:26.289 "num_blocks": 8192, 00:24:26.289 "block_size": 4096, 00:24:26.289 "physical_block_size": 4096, 00:24:26.289 "uuid": "6a94342e-a67b-43ff-a519-866118a7ad96", 00:24:26.289 "optimal_io_boundary": 0, 00:24:26.289 "md_size": 0, 00:24:26.289 "dif_type": 0, 00:24:26.289 "dif_is_head_of_md": false, 00:24:26.289 "dif_pi_format": 0 00:24:26.289 } 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "method": "bdev_wait_for_examine" 
00:24:26.289 } 00:24:26.289 ] 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "subsystem": "nbd", 00:24:26.289 "config": [] 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "subsystem": "scheduler", 00:24:26.289 "config": [ 00:24:26.289 { 00:24:26.289 "method": "framework_set_scheduler", 00:24:26.289 "params": { 00:24:26.289 "name": "static" 00:24:26.289 } 00:24:26.289 } 00:24:26.289 ] 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "subsystem": "nvmf", 00:24:26.289 "config": [ 00:24:26.289 { 00:24:26.289 "method": "nvmf_set_config", 00:24:26.289 "params": { 00:24:26.289 "discovery_filter": "match_any", 00:24:26.289 "admin_cmd_passthru": { 00:24:26.289 "identify_ctrlr": false 00:24:26.289 }, 00:24:26.289 "dhchap_digests": [ 00:24:26.289 "sha256", 00:24:26.289 "sha384", 00:24:26.289 "sha512" 00:24:26.289 ], 00:24:26.289 "dhchap_dhgroups": [ 00:24:26.289 "null", 00:24:26.289 "ffdhe2048", 00:24:26.289 "ffdhe3072", 00:24:26.289 "ffdhe4096", 00:24:26.289 "ffdhe6144", 00:24:26.289 "ffdhe8192" 00:24:26.289 ] 00:24:26.289 } 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "method": "nvmf_set_max_subsystems", 00:24:26.289 "params": { 00:24:26.289 "max_subsystems": 1024 00:24:26.289 } 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "method": "nvmf_set_crdt", 00:24:26.289 "params": { 00:24:26.289 "crdt1": 0, 00:24:26.289 "crdt2": 0, 00:24:26.289 "crdt3": 0 00:24:26.289 } 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "method": "nvmf_create_transport", 00:24:26.289 "params": { 00:24:26.289 "trtype": "TCP", 00:24:26.289 "max_queue_depth": 128, 00:24:26.289 "max_io_qpairs_per_ctrlr": 127, 00:24:26.289 "in_capsule_data_size": 4096, 00:24:26.289 "max_io_size": 131072, 00:24:26.289 "io_unit_size": 131072, 00:24:26.289 "max_aq_depth": 128, 00:24:26.289 "num_shared_buffers": 511, 00:24:26.289 "buf_cache_size": 4294967295, 00:24:26.289 "dif_insert_or_strip": false, 00:24:26.289 "zcopy": false, 00:24:26.289 "c2h_success": false, 00:24:26.289 "sock_priority": 0, 00:24:26.289 "abort_timeout_sec": 1, 00:24:26.289 "ack_timeout": 0, 00:24:26.289 "data_wr_pool_size": 0 00:24:26.289 } 00:24:26.289 }, 00:24:26.289 { 00:24:26.289 "method": "nvmf_create_subsystem", 00:24:26.289 "params": { 00:24:26.289 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.289 "allow_any_host": false, 00:24:26.289 "serial_number": "00000000000000000000", 00:24:26.290 "model_number": "SPDK bdev Controller", 00:24:26.290 "max_namespaces": 32, 00:24:26.290 "min_cntlid": 1, 00:24:26.290 "max_cntlid": 65519, 00:24:26.290 "ana_reporting": false 00:24:26.290 } 00:24:26.290 }, 00:24:26.290 { 00:24:26.290 "method": "nvmf_subsystem_add_host", 00:24:26.290 "params": { 00:24:26.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.290 "host": "nqn.2016-06.io.spdk:host1", 00:24:26.290 "psk": "key0" 00:24:26.290 } 00:24:26.290 }, 00:24:26.290 { 00:24:26.290 "method": "nvmf_subsystem_add_ns", 00:24:26.290 "params": { 00:24:26.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.290 "namespace": { 00:24:26.290 "nsid": 1, 00:24:26.290 "bdev_name": "malloc0", 00:24:26.290 "nguid": "6A94342EA67B43FFA519866118A7AD96", 00:24:26.290 "uuid": "6a94342e-a67b-43ff-a519-866118a7ad96", 00:24:26.290 "no_auto_visible": false 00:24:26.290 } 00:24:26.290 } 00:24:26.290 }, 00:24:26.290 { 00:24:26.290 "method": "nvmf_subsystem_add_listener", 00:24:26.290 "params": { 00:24:26.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.290 "listen_address": { 00:24:26.290 "trtype": "TCP", 00:24:26.290 "adrfam": "IPv4", 00:24:26.290 "traddr": "10.0.0.2", 00:24:26.290 "trsvcid": "4420" 00:24:26.290 }, 00:24:26.290 
"secure_channel": false, 00:24:26.290 "sock_impl": "ssl" 00:24:26.290 } 00:24:26.290 } 00:24:26.290 ] 00:24:26.290 } 00:24:26.290 ] 00:24:26.290 }' 00:24:26.290 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.290 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3903706 00:24:26.290 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:26.290 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3903706 00:24:26.290 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3903706 ']' 00:24:26.290 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.290 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:26.290 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.290 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:26.290 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.549 [2024-11-07 13:29:34.368059] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:24:26.549 [2024-11-07 13:29:34.368182] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.549 [2024-11-07 13:29:34.526018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.810 [2024-11-07 13:29:34.625553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.810 [2024-11-07 13:29:34.625596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.810 [2024-11-07 13:29:34.625611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.810 [2024-11-07 13:29:34.625623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.810 [2024-11-07 13:29:34.625635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:26.810 [2024-11-07 13:29:34.626882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.070 [2024-11-07 13:29:35.028274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.070 [2024-11-07 13:29:35.060293] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:27.070 [2024-11-07 13:29:35.060559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3903940 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3903940 /var/tmp/bdevperf.sock 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3903940 ']' 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
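(Editor's sketch.) Because bdevperf runs with -z it starts idle and is driven entirely over its private RPC socket. In the first pass of this test the PSK and controller were added via rpc.py; in this second pass they arrive with the -c /dev/fd/63 config echoed below. Either way the flow is, assuming the target above is listening:

    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2MGxW7SPjs
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests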
00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.330 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:27.330 "subsystems": [ 00:24:27.330 { 00:24:27.330 "subsystem": "keyring", 00:24:27.330 "config": [ 00:24:27.330 { 00:24:27.330 "method": "keyring_file_add_key", 00:24:27.330 "params": { 00:24:27.330 "name": "key0", 00:24:27.330 "path": "/tmp/tmp.2MGxW7SPjs" 00:24:27.330 } 00:24:27.330 } 00:24:27.330 ] 00:24:27.330 }, 00:24:27.330 { 00:24:27.330 "subsystem": "iobuf", 00:24:27.330 "config": [ 00:24:27.330 { 00:24:27.330 "method": "iobuf_set_options", 00:24:27.330 "params": { 00:24:27.330 "small_pool_count": 8192, 00:24:27.330 "large_pool_count": 1024, 00:24:27.330 "small_bufsize": 8192, 00:24:27.330 "large_bufsize": 135168, 00:24:27.330 "enable_numa": false 00:24:27.330 } 00:24:27.330 } 00:24:27.330 ] 00:24:27.330 }, 00:24:27.330 { 00:24:27.330 "subsystem": "sock", 00:24:27.330 "config": [ 00:24:27.330 { 00:24:27.330 "method": "sock_set_default_impl", 00:24:27.330 "params": { 00:24:27.330 "impl_name": "posix" 00:24:27.330 } 00:24:27.330 }, 00:24:27.330 { 00:24:27.330 "method": "sock_impl_set_options", 00:24:27.330 "params": { 00:24:27.330 "impl_name": "ssl", 00:24:27.330 "recv_buf_size": 4096, 00:24:27.330 "send_buf_size": 4096, 00:24:27.330 "enable_recv_pipe": true, 00:24:27.330 "enable_quickack": false, 00:24:27.330 "enable_placement_id": 0, 00:24:27.330 "enable_zerocopy_send_server": true, 00:24:27.330 "enable_zerocopy_send_client": false, 00:24:27.330 "zerocopy_threshold": 0, 00:24:27.330 "tls_version": 0, 00:24:27.330 "enable_ktls": false 00:24:27.330 } 00:24:27.330 }, 00:24:27.330 { 00:24:27.330 "method": "sock_impl_set_options", 00:24:27.331 "params": { 00:24:27.331 "impl_name": "posix", 00:24:27.331 "recv_buf_size": 2097152, 00:24:27.331 "send_buf_size": 2097152, 00:24:27.331 "enable_recv_pipe": true, 00:24:27.331 "enable_quickack": false, 00:24:27.331 "enable_placement_id": 0, 00:24:27.331 "enable_zerocopy_send_server": true, 00:24:27.331 "enable_zerocopy_send_client": false, 00:24:27.331 "zerocopy_threshold": 0, 00:24:27.331 "tls_version": 0, 00:24:27.331 "enable_ktls": false 00:24:27.331 } 00:24:27.331 } 00:24:27.331 ] 00:24:27.331 }, 00:24:27.331 { 00:24:27.331 "subsystem": "vmd", 00:24:27.331 "config": [] 00:24:27.331 }, 00:24:27.331 { 00:24:27.331 "subsystem": "accel", 00:24:27.331 "config": [ 00:24:27.331 { 00:24:27.331 "method": "accel_set_options", 00:24:27.331 "params": { 00:24:27.331 "small_cache_size": 128, 00:24:27.331 "large_cache_size": 16, 00:24:27.331 "task_count": 2048, 00:24:27.331 "sequence_count": 2048, 00:24:27.331 "buf_count": 2048 00:24:27.331 } 00:24:27.331 } 00:24:27.331 ] 00:24:27.331 }, 00:24:27.331 { 00:24:27.331 "subsystem": "bdev", 00:24:27.331 "config": [ 00:24:27.331 { 00:24:27.331 "method": "bdev_set_options", 00:24:27.331 "params": { 00:24:27.331 "bdev_io_pool_size": 65535, 00:24:27.331 "bdev_io_cache_size": 256, 00:24:27.331 "bdev_auto_examine": true, 00:24:27.331 "iobuf_small_cache_size": 128, 00:24:27.331 "iobuf_large_cache_size": 16 00:24:27.331 } 00:24:27.331 }, 00:24:27.331 { 00:24:27.331 "method": 
"bdev_raid_set_options", 00:24:27.331 "params": { 00:24:27.331 "process_window_size_kb": 1024, 00:24:27.331 "process_max_bandwidth_mb_sec": 0 00:24:27.331 } 00:24:27.331 }, 00:24:27.331 { 00:24:27.331 "method": "bdev_iscsi_set_options", 00:24:27.331 "params": { 00:24:27.331 "timeout_sec": 30 00:24:27.331 } 00:24:27.331 }, 00:24:27.331 { 00:24:27.331 "method": "bdev_nvme_set_options", 00:24:27.331 "params": { 00:24:27.331 "action_on_timeout": "none", 00:24:27.331 "timeout_us": 0, 00:24:27.331 "timeout_admin_us": 0, 00:24:27.331 "keep_alive_timeout_ms": 10000, 00:24:27.331 "arbitration_burst": 0, 00:24:27.331 "low_priority_weight": 0, 00:24:27.331 "medium_priority_weight": 0, 00:24:27.331 "high_priority_weight": 0, 00:24:27.331 "nvme_adminq_poll_period_us": 10000, 00:24:27.331 "nvme_ioq_poll_period_us": 0, 00:24:27.331 "io_queue_requests": 512, 00:24:27.331 "delay_cmd_submit": true, 00:24:27.331 "transport_retry_count": 4, 00:24:27.331 "bdev_retry_count": 3, 00:24:27.331 "transport_ack_timeout": 0, 00:24:27.331 "ctrlr_loss_timeout_sec": 0, 00:24:27.331 "reconnect_delay_sec": 0, 00:24:27.331 "fast_io_fail_timeout_sec": 0, 00:24:27.331 "disable_auto_failback": false, 00:24:27.331 "generate_uuids": false, 00:24:27.331 "transport_tos": 0, 00:24:27.331 "nvme_error_stat": false, 00:24:27.331 "rdma_srq_size": 0, 00:24:27.331 "io_path_stat": false, 00:24:27.331 "allow_accel_sequence": false, 00:24:27.331 "rdma_max_cq_size": 0, 00:24:27.331 "rdma_cm_event_timeout_ms": 0, 00:24:27.331 "dhchap_digests": [ 00:24:27.331 "sha256", 00:24:27.331 "sha384", 00:24:27.331 "sha512" 00:24:27.331 ], 00:24:27.331 "dhchap_dhgroups": [ 00:24:27.331 "null", 00:24:27.331 "ffdhe2048", 00:24:27.331 "ffdhe3072", 00:24:27.331 "ffdhe4096", 00:24:27.331 "ffdhe6144", 00:24:27.331 "ffdhe8192" 00:24:27.331 ] 00:24:27.331 } 00:24:27.331 }, 00:24:27.331 { 00:24:27.331 "method": "bdev_nvme_attach_controller", 00:24:27.331 "params": { 00:24:27.331 "name": "nvme0", 00:24:27.331 "trtype": "TCP", 00:24:27.331 "adrfam": "IPv4", 00:24:27.331 "traddr": "10.0.0.2", 00:24:27.331 "trsvcid": "4420", 00:24:27.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.331 "prchk_reftag": false, 00:24:27.331 "prchk_guard": false, 00:24:27.331 "ctrlr_loss_timeout_sec": 0, 00:24:27.331 "reconnect_delay_sec": 0, 00:24:27.331 "fast_io_fail_timeout_sec": 0, 00:24:27.331 "psk": "key0", 00:24:27.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:27.331 "hdgst": false, 00:24:27.331 "ddgst": false, 00:24:27.331 "multipath": "multipath" 00:24:27.331 } 00:24:27.331 }, 00:24:27.331 { 00:24:27.331 "method": "bdev_nvme_set_hotplug", 00:24:27.331 "params": { 00:24:27.331 "period_us": 100000, 00:24:27.331 "enable": false 00:24:27.331 } 00:24:27.331 }, 00:24:27.331 { 00:24:27.331 "method": "bdev_enable_histogram", 00:24:27.331 "params": { 00:24:27.331 "name": "nvme0n1", 00:24:27.331 "enable": true 00:24:27.331 } 00:24:27.331 }, 00:24:27.331 { 00:24:27.331 "method": "bdev_wait_for_examine" 00:24:27.331 } 00:24:27.331 ] 00:24:27.331 }, 00:24:27.331 { 00:24:27.331 "subsystem": "nbd", 00:24:27.331 "config": [] 00:24:27.331 } 00:24:27.331 ] 00:24:27.331 }' 00:24:27.331 [2024-11-07 13:29:35.231341] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:24:27.331 [2024-11-07 13:29:35.231449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903940 ] 00:24:27.591 [2024-11-07 13:29:35.371663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.591 [2024-11-07 13:29:35.445443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.852 [2024-11-07 13:29:35.703493] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:28.112 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:28.112 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:28.112 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:28.112 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:28.372 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.372 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:28.372 Running I/O for 1 seconds... 00:24:29.312 3708.00 IOPS, 14.48 MiB/s 00:24:29.312 Latency(us) 00:24:29.312 [2024-11-07T12:29:37.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.312 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:29.312 Verification LBA range: start 0x0 length 0x2000 00:24:29.312 nvme0n1 : 1.02 3766.44 14.71 0.00 0.00 33674.05 5406.72 83449.17 00:24:29.312 [2024-11-07T12:29:37.319Z] =================================================================================================================== 00:24:29.312 [2024-11-07T12:29:37.319Z] Total : 3766.44 14.71 0.00 0.00 33674.05 5406.72 83449.17 00:24:29.312 { 00:24:29.312 "results": [ 00:24:29.312 { 00:24:29.312 "job": "nvme0n1", 00:24:29.312 "core_mask": "0x2", 00:24:29.312 "workload": "verify", 00:24:29.312 "status": "finished", 00:24:29.312 "verify_range": { 00:24:29.312 "start": 0, 00:24:29.312 "length": 8192 00:24:29.312 }, 00:24:29.312 "queue_depth": 128, 00:24:29.312 "io_size": 4096, 00:24:29.312 "runtime": 1.018468, 00:24:29.312 "iops": 3766.4413609460485, 00:24:29.312 "mibps": 14.712661566195502, 00:24:29.312 "io_failed": 0, 00:24:29.312 "io_timeout": 0, 00:24:29.312 "avg_latency_us": 33674.04602015989, 00:24:29.312 "min_latency_us": 5406.72, 00:24:29.312 "max_latency_us": 83449.17333333334 00:24:29.312 } 00:24:29.312 ], 00:24:29.312 "core_count": 1 00:24:29.312 } 00:24:29.312 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:29.312 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:29.313 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:29.313 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:24:29.313 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:24:29.313 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid 
']' 00:24:29.313 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:29.313 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:24:29.313 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:24:29.313 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:24:29.313 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:29.313 nvmf_trace.0 00:24:29.573 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:24:29.573 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3903940 00:24:29.573 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3903940 ']' 00:24:29.573 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3903940 00:24:29.573 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:29.573 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:29.573 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3903940 00:24:29.573 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:29.573 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:29.573 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3903940' 00:24:29.573 killing process with pid 3903940 00:24:29.573 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3903940 00:24:29.573 Received shutdown signal, test time was about 1.000000 seconds 00:24:29.573 00:24:29.573 Latency(us) 00:24:29.573 [2024-11-07T12:29:37.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.573 [2024-11-07T12:29:37.580Z] =================================================================================================================== 00:24:29.573 [2024-11-07T12:29:37.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:29.573 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3903940 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:30.145 rmmod nvme_tcp 00:24:30.145 rmmod nvme_fabrics 00:24:30.145 rmmod nvme_keyring 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:30.145 13:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3903706 ']' 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3903706 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3903706 ']' 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3903706 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:30.145 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3903706 00:24:30.145 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:30.145 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:30.145 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3903706' 00:24:30.145 killing process with pid 3903706 00:24:30.145 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3903706 00:24:30.145 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3903706 00:24:31.087 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:31.087 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:31.087 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:31.087 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:31.087 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:31.087 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:31.087 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:31.087 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:31.087 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:31.087 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.087 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.087 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.996 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:32.996 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.4PwVgsX2nf /tmp/tmp.uWpQUGgLgJ /tmp/tmp.2MGxW7SPjs 00:24:32.996 00:24:32.996 real 1m38.380s 00:24:32.996 user 2m31.129s 00:24:32.996 sys 0m29.998s 00:24:32.996 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:32.996 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.996 ************************************ 00:24:32.996 END TEST nvmf_tls 
00:24:32.996 ************************************ 00:24:32.996 13:29:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:32.996 13:29:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:32.996 13:29:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:32.996 13:29:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:32.996 ************************************ 00:24:32.996 START TEST nvmf_fips 00:24:32.996 ************************************ 00:24:32.996 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:33.258 * Looking for test storage... 00:24:33.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:33.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.258 --rc genhtml_branch_coverage=1 00:24:33.258 --rc genhtml_function_coverage=1 00:24:33.258 --rc genhtml_legend=1 00:24:33.258 --rc geninfo_all_blocks=1 00:24:33.258 --rc geninfo_unexecuted_blocks=1 00:24:33.258 00:24:33.258 ' 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:33.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.258 --rc genhtml_branch_coverage=1 00:24:33.258 --rc genhtml_function_coverage=1 00:24:33.258 --rc genhtml_legend=1 00:24:33.258 --rc geninfo_all_blocks=1 00:24:33.258 --rc geninfo_unexecuted_blocks=1 00:24:33.258 00:24:33.258 ' 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:33.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.258 --rc genhtml_branch_coverage=1 00:24:33.258 --rc genhtml_function_coverage=1 00:24:33.258 --rc genhtml_legend=1 00:24:33.258 --rc geninfo_all_blocks=1 00:24:33.258 --rc geninfo_unexecuted_blocks=1 00:24:33.258 00:24:33.258 ' 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:33.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.258 --rc genhtml_branch_coverage=1 00:24:33.258 --rc genhtml_function_coverage=1 00:24:33.258 --rc genhtml_legend=1 00:24:33.258 --rc geninfo_all_blocks=1 00:24:33.258 --rc geninfo_unexecuted_blocks=1 00:24:33.258 00:24:33.258 ' 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.258 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:33.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:33.259 13:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:33.259 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:33.520 Error setting digest 00:24:33.520 403297D4B27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:33.520 403297D4B27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:33.520 
13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:33.520 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.666 13:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:41.666 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:41.667 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:41.667 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.667 13:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:41.667 Found net devices under 0000:31:00.0: cvl_0_0 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:41.667 Found net devices under 0000:31:00.1: cvl_0_1 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:41.667 13:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:41.667 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:41.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:24:41.667 00:24:41.667 --- 10.0.0.2 ping statistics --- 00:24:41.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.667 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:41.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:24:41.667 00:24:41.667 --- 10.0.0.1 ping statistics --- 00:24:41.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.667 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3909286 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3909286 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3909286 ']' 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:41.667 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:41.667 [2024-11-07 13:29:49.287331] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
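For reference, the test network assembled in the trace above is two cross-connected E810 ports on one host, with the target port moved into a private network namespace so initiator and target get separate stacks. A condensed sketch of that plumbing, using the interface names, addresses, and flags exactly as they appear in this log (run as root; error handling and the iptables SPDK_NVMF comment tag are omitted, and the nvmf_tgt path is shortened to the repo-relative form):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # target address reachable from the initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

The two ping exchanges above are the reachability checks in both directions; the DPDK EAL parameter dump that follows is nvmf_tgt continuing to start inside the namespace.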
00:24:41.668 [2024-11-07 13:29:49.287472] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.668 [2024-11-07 13:29:49.465184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.668 [2024-11-07 13:29:49.585475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.668 [2024-11-07 13:29:49.585549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.668 [2024-11-07 13:29:49.585562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.668 [2024-11-07 13:29:49.585574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.668 [2024-11-07 13:29:49.585584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:41.668 [2024-11-07 13:29:49.587125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.NEA 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.NEA 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.NEA 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.NEA 00:24:42.239 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:42.239 [2024-11-07 13:29:50.213521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.239 [2024-11-07 13:29:50.229497] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:42.239 [2024-11-07 13:29:50.229833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.500 malloc0 00:24:42.500 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:42.500 13:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3909447 00:24:42.500 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3909447 /var/tmp/bdevperf.sock 00:24:42.500 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:42.501 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3909447 ']' 00:24:42.501 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.501 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:42.501 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.501 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:42.501 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.501 [2024-11-07 13:29:50.428605] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:24:42.501 [2024-11-07 13:29:50.428748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3909447 ] 00:24:42.762 [2024-11-07 13:29:50.560452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.762 [2024-11-07 13:29:50.635239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.333 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:43.333 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:43.333 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.NEA 00:24:43.593 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:43.593 [2024-11-07 13:29:51.497022] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:43.593 TLSTESTn1 00:24:43.854 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:43.854 Running I/O for 10 seconds... 
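The attach sequence just traced is the core of the TLS data path under test: the interchange-format PSK is written to a 0600 temp file, registered with the bdevperf keyring as key0, and handed to bdev_nvme_attach_controller so the TCP connection to the target at 10.0.0.2:4420 is negotiated over TLS. A minimal replay of those steps against an already-running bdevperf instance (socket path, NQNs, bdev name, and the sample key are copied verbatim from this log; rpc.py and bdevperf.py are shortened to repo-relative paths):

  KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  PSK_PATH=$(mktemp -t spdk-psk.XXX)
  echo -n "$KEY" > "$PSK_PATH"
  chmod 0600 "$PSK_PATH"                             # restrict permissions, as the test does
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$PSK_PATH"
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The per-second IOPS readings that follow are the ten-second verify run this last command kicks off.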
00:24:45.735 4285.00 IOPS, 16.74 MiB/s [2024-11-07T12:29:55.124Z] 4603.00 IOPS, 17.98 MiB/s [2024-11-07T12:29:56.065Z] 4691.67 IOPS, 18.33 MiB/s [2024-11-07T12:29:57.005Z] 4790.00 IOPS, 18.71 MiB/s [2024-11-07T12:29:57.947Z] 4724.00 IOPS, 18.45 MiB/s [2024-11-07T12:29:58.887Z] 4768.67 IOPS, 18.63 MiB/s [2024-11-07T12:29:59.828Z] 4832.86 IOPS, 18.88 MiB/s [2024-11-07T12:30:00.768Z] 4791.50 IOPS, 18.72 MiB/s [2024-11-07T12:30:02.151Z] 4825.11 IOPS, 18.85 MiB/s [2024-11-07T12:30:02.151Z] 4844.40 IOPS, 18.92 MiB/s 00:24:54.144 Latency(us) 00:24:54.144 [2024-11-07T12:30:02.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.144 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:54.144 Verification LBA range: start 0x0 length 0x2000 00:24:54.144 TLSTESTn1 : 10.02 4845.31 18.93 0.00 0.00 26370.15 6253.23 58108.59 00:24:54.144 [2024-11-07T12:30:02.151Z] =================================================================================================================== 00:24:54.144 [2024-11-07T12:30:02.151Z] Total : 4845.31 18.93 0.00 0.00 26370.15 6253.23 58108.59 00:24:54.144 { 00:24:54.144 "results": [ 00:24:54.144 { 00:24:54.144 "job": "TLSTESTn1", 00:24:54.144 "core_mask": "0x4", 00:24:54.144 "workload": "verify", 00:24:54.144 "status": "finished", 00:24:54.144 "verify_range": { 00:24:54.144 "start": 0, 00:24:54.144 "length": 8192 00:24:54.144 }, 00:24:54.144 "queue_depth": 128, 00:24:54.144 "io_size": 4096, 00:24:54.144 "runtime": 10.024534, 00:24:54.144 "iops": 4845.312510287261, 00:24:54.144 "mibps": 18.927001993309613, 00:24:54.144 "io_failed": 0, 00:24:54.144 "io_timeout": 0, 00:24:54.144 "avg_latency_us": 26370.147844848885, 00:24:54.144 "min_latency_us": 6253.2266666666665, 00:24:54.144 "max_latency_us": 58108.58666666667 00:24:54.144 } 00:24:54.144 ], 00:24:54.144 "core_count": 1 00:24:54.144 } 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:54.144 nvmf_trace.0 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3909447 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3909447 ']' 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 3909447 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3909447 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3909447' 00:24:54.144 killing process with pid 3909447 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3909447 00:24:54.144 Received shutdown signal, test time was about 10.000000 seconds 00:24:54.144 00:24:54.144 Latency(us) 00:24:54.144 [2024-11-07T12:30:02.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.144 [2024-11-07T12:30:02.151Z] =================================================================================================================== 00:24:54.144 [2024-11-07T12:30:02.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.144 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3909447 00:24:54.405 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:54.405 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:54.405 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:54.405 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:54.405 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:54.405 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:54.405 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:54.405 rmmod nvme_tcp 00:24:54.665 rmmod nvme_fabrics 00:24:54.665 rmmod nvme_keyring 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3909286 ']' 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3909286 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3909286 ']' 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3909286 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3909286 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:54.665 13:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3909286' 00:24:54.665 killing process with pid 3909286 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3909286 00:24:54.665 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3909286 00:24:55.235 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:55.235 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:55.235 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:55.235 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:55.235 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:55.235 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:55.235 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:55.235 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.235 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.235 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.235 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.235 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.NEA 00:24:57.781 00:24:57.781 real 0m24.253s 00:24:57.781 user 0m25.281s 00:24:57.781 sys 0m10.456s 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.781 ************************************ 00:24:57.781 END TEST nvmf_fips 00:24:57.781 ************************************ 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:57.781 ************************************ 00:24:57.781 START TEST nvmf_control_msg_list 00:24:57.781 ************************************ 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:57.781 * Looking for test storage... 
00:24:57.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:57.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.781 --rc genhtml_branch_coverage=1 00:24:57.781 --rc genhtml_function_coverage=1 00:24:57.781 --rc genhtml_legend=1 00:24:57.781 --rc geninfo_all_blocks=1 00:24:57.781 --rc geninfo_unexecuted_blocks=1 00:24:57.781 00:24:57.781 ' 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:57.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.781 --rc genhtml_branch_coverage=1 00:24:57.781 --rc genhtml_function_coverage=1 00:24:57.781 --rc genhtml_legend=1 00:24:57.781 --rc geninfo_all_blocks=1 00:24:57.781 --rc geninfo_unexecuted_blocks=1 00:24:57.781 00:24:57.781 ' 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:57.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.781 --rc genhtml_branch_coverage=1 00:24:57.781 --rc genhtml_function_coverage=1 00:24:57.781 --rc genhtml_legend=1 00:24:57.781 --rc geninfo_all_blocks=1 00:24:57.781 --rc geninfo_unexecuted_blocks=1 00:24:57.781 00:24:57.781 ' 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:57.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.781 --rc genhtml_branch_coverage=1 00:24:57.781 --rc genhtml_function_coverage=1 00:24:57.781 --rc genhtml_legend=1 00:24:57.781 --rc geninfo_all_blocks=1 00:24:57.781 --rc geninfo_unexecuted_blocks=1 00:24:57.781 00:24:57.781 ' 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.781 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.782 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:05.920 13:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.920 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:05.921 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.921 13:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:05.921 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:05.921 Found net devices under 0000:31:00.0: cvl_0_0 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:05.921 Found net devices under 0000:31:00.1: cvl_0_1 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.921 13:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:25:05.921 00:25:05.921 --- 10.0.0.2 ping statistics --- 00:25:05.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.921 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:25:05.921 00:25:05.921 --- 10.0.0.1 ping statistics --- 00:25:05.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.921 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.921 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.182 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:06.182 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:06.182 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:06.182 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:06.182 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:06.182 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.182 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3917164 00:25:06.182 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3917164 00:25:06.182 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:06.182 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 3917164 ']' 00:25:06.182 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.183 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:06.183 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.183 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:06.183 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.183 [2024-11-07 13:30:14.062028] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:25:06.183 [2024-11-07 13:30:14.062132] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.443 [2024-11-07 13:30:14.216873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.443 [2024-11-07 13:30:14.314185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.443 [2024-11-07 13:30:14.314231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.443 [2024-11-07 13:30:14.314242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.443 [2024-11-07 13:30:14.314254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.443 [2024-11-07 13:30:14.314265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
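[Note] The target being waited on here was launched inside the network namespace that common.sh assembled a few lines earlier. A minimal sketch of that topology, using only the interface names, addresses, and commands as they appear in this log (cvl_0_0/cvl_0_1 are the two E810 ports discovered above; the comments are annotations, not harness code):

  ip netns add cvl_0_0_ns_spdk                # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # first port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1         # second port stays in the root ns (initiator)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # every target-side command is then wrapped, e.g. the nvmf_tgt launch above:
  ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF

Splitting the two ports across namespaces presumably forces initiator-to-target traffic (10.0.0.1 -> 10.0.0.2) out through the physical links rather than a loopback shortcut, which the cross-namespace pings at 13:30:13 then verify in both directions.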
00:25:06.443 [2024-11-07 13:30:14.315494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.014 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.015 [2024-11-07 13:30:14.865246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.015 Malloc0 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.015 13:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:07.015 [2024-11-07 13:30:14.936220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3917300 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3917302 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3917303 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3917300 00:25:07.015 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.275 [2024-11-07 13:30:15.047256] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:07.275 [2024-11-07 13:30:15.056938] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:07.275 [2024-11-07 13:30:15.076975] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:08.215 Initializing NVMe Controllers 00:25:08.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:08.215 Initialization complete. Launching workers. 
00:25:08.215 ======================================================== 00:25:08.215 Latency(us) 00:25:08.215 Device Information : IOPS MiB/s Average min max 00:25:08.215 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1568.00 6.12 637.75 250.45 1318.92 00:25:08.215 ======================================================== 00:25:08.215 Total : 1568.00 6.12 637.75 250.45 1318.92 00:25:08.215 00:25:08.215 Initializing NVMe Controllers 00:25:08.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:08.215 Initialization complete. Launching workers. 00:25:08.215 ======================================================== 00:25:08.215 Latency(us) 00:25:08.215 Device Information : IOPS MiB/s Average min max 00:25:08.215 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1441.00 5.63 693.84 314.33 1271.51 00:25:08.215 ======================================================== 00:25:08.215 Total : 1441.00 5.63 693.84 314.33 1271.51 00:25:08.215 00:25:08.215 Initializing NVMe Controllers 00:25:08.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:08.215 Initialization complete. Launching workers. 00:25:08.215 ======================================================== 00:25:08.215 Latency(us) 00:25:08.215 Device Information : IOPS MiB/s Average min max 00:25:08.215 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1463.00 5.71 683.56 188.00 940.12 00:25:08.215 ======================================================== 00:25:08.215 Total : 1463.00 5.71 683.56 188.00 940.12 00:25:08.215 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3917302 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3917303 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.475 rmmod nvme_tcp 00:25:08.475 rmmod nvme_fabrics 00:25:08.475 rmmod nvme_keyring 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 
3917164 ']' 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3917164 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 3917164 ']' 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 3917164 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3917164 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3917164' 00:25:08.475 killing process with pid 3917164 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 3917164 00:25:08.475 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 3917164 00:25:09.415 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.415 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.415 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.415 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:09.415 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:09.415 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.415 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.415 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.415 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.415 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.415 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.415 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.327 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.327 00:25:11.327 real 0m13.995s 00:25:11.327 user 0m8.895s 00:25:11.327 sys 0m7.246s 00:25:11.327 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:11.327 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:11.327 ************************************ 00:25:11.327 END TEST nvmf_control_msg_list 00:25:11.327 ************************************ 
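[Note] Both teardowns above (nvmf_fips at 13:30:03 and nvmf_control_msg_list at 13:30:17) end with the same iptables tag-and-sweep pattern from common.sh: every rule the harness inserts via its ipts wrapper carries an identifying comment, so cleanup can drop exactly those rules without tracking them. A sketch using the rule text exactly as logged:

  # setup (common.sh @287/@790): tag the rule with an SPDK_NVMF comment
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # teardown (common.sh @297/@791): dump all rules, filter out tagged ones, reload the rest
  iptables-save | grep -v SPDK_NVMF | iptables-restore

One recurring message worth flagging: the "common.sh: line 33: [: : integer expression expected" complaint appears once per source of common.sh (and again in the next test below). It comes from the logged test '[' '' -eq 1 ']', a numeric comparison against an empty string; the script continues on the false branch and both tests still complete, so the message appears to be cosmetic rather than a failure.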
00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:11.588 ************************************ 00:25:11.588 START TEST nvmf_wait_for_buf 00:25:11.588 ************************************ 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:11.588 * Looking for test storage... 00:25:11.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.588 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:11.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.589 --rc genhtml_branch_coverage=1 00:25:11.589 --rc genhtml_function_coverage=1 00:25:11.589 --rc genhtml_legend=1 00:25:11.589 --rc geninfo_all_blocks=1 00:25:11.589 --rc geninfo_unexecuted_blocks=1 00:25:11.589 00:25:11.589 ' 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:11.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.589 --rc genhtml_branch_coverage=1 00:25:11.589 --rc genhtml_function_coverage=1 00:25:11.589 --rc genhtml_legend=1 00:25:11.589 --rc geninfo_all_blocks=1 00:25:11.589 --rc geninfo_unexecuted_blocks=1 00:25:11.589 00:25:11.589 ' 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:11.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.589 --rc genhtml_branch_coverage=1 00:25:11.589 --rc genhtml_function_coverage=1 00:25:11.589 --rc genhtml_legend=1 00:25:11.589 --rc geninfo_all_blocks=1 00:25:11.589 --rc geninfo_unexecuted_blocks=1 00:25:11.589 00:25:11.589 ' 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:11.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.589 --rc genhtml_branch_coverage=1 00:25:11.589 --rc genhtml_function_coverage=1 00:25:11.589 --rc genhtml_legend=1 00:25:11.589 --rc geninfo_all_blocks=1 00:25:11.589 --rc geninfo_unexecuted_blocks=1 00:25:11.589 00:25:11.589 ' 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.589 13:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.589 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:19.733 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.733 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:19.733 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:19.733 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:19.733 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:19.733 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:19.733 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:19.733 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:19.733 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:19.733 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:19.733 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.734 
13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:19.734 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:19.734 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:19.734 Found net devices under 0000:31:00.0: cvl_0_0 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:19.734 Found net devices under 0000:31:00.1: cvl_0_1 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.734 13:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.734 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.734 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.734 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.734 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:19.734 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.734 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.734 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.734 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:19.734 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:19.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:25:19.734 00:25:19.734 --- 10.0.0.2 ping statistics --- 00:25:19.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.734 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:25:19.734 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:19.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:25:19.734 00:25:19.734 --- 10.0.0.1 ping statistics --- 00:25:19.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.734 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:25:19.734 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3922192 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3922192 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 3922192 ']' 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:19.735 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:19.735 [2024-11-07 13:30:27.397673] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
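The trace above is nvmftestinit wiring up the test topology: one port of the dual-port E810 NIC (cvl_0_0) is moved into a fresh network namespace to serve as the NVMe/TCP target at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and a ping in each direction proves the path; the target application is then launched inside the namespace. A minimal sketch of the same plumbing, with a veth pair standing in for the physical ports so it runs on any Linux host (demo_ns, veth_ini, veth_tgt are illustrative names, not from this run):

    # requires root plus iproute2/iptables; veth substitutes for the E810 ports
    ip netns add demo_ns
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns demo_ns                    # target side enters the namespace
    ip addr add 10.0.0.1/24 dev veth_ini                  # initiator address, as in this run
    ip netns exec demo_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec demo_ns ip link set veth_tgt up
    ip netns exec demo_ns ip link set lo up
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                    # root namespace -> target
    ip netns exec demo_ns ping -c 1 10.0.0.1              # target namespace -> initiator

Putting only the target end of the link in a namespace is what lets a single machine exercise real NIC-to-NIC TCP traffic between initiator and target, which is why both pings must succeed before the test proceeds.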
00:25:19.735 [2024-11-07 13:30:27.397808] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.735 [2024-11-07 13:30:27.559991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.735 [2024-11-07 13:30:27.656772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.735 [2024-11-07 13:30:27.656817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.735 [2024-11-07 13:30:27.656828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.735 [2024-11-07 13:30:27.656840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.735 [2024-11-07 13:30:27.656850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.735 [2024-11-07 13:30:27.658109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:20.306 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.306 13:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:20.567 Malloc0 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:20.567 [2024-11-07 13:30:28.428636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:20.567 [2024-11-07 13:30:28.452870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.567 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:20.827 [2024-11-07 13:30:28.597979] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:22.224 Initializing NVMe Controllers 00:25:22.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:22.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:22.224 Initialization complete. Launching workers. 00:25:22.224 ======================================================== 00:25:22.224 Latency(us) 00:25:22.224 Device Information : IOPS MiB/s Average min max 00:25:22.224 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 165808.99 47778.91 191554.53 00:25:22.224 ======================================================== 00:25:22.224 Total : 25.00 3.12 165808.99 47778.91 191554.53 00:25:22.224 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:22.224 rmmod nvme_tcp 00:25:22.224 rmmod nvme_fabrics 00:25:22.224 rmmod nvme_keyring 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3922192 ']' 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3922192 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 3922192 ']' 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 3922192 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:22.224 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3922192 00:25:22.483 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:22.483 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:22.483 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3922192' 00:25:22.483 killing process with pid 3922192 00:25:22.483 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 3922192 00:25:22.483 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 3922192 00:25:23.054 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:23.054 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:23.054 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:23.054 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:23.054 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:23.054 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:23.054 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:23.054 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:23.054 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:23.054 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.054 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.054 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:25.674 00:25:25.674 real 0m13.722s 00:25:25.674 user 0m5.772s 00:25:25.674 sys 0m6.462s 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:25.674 ************************************ 00:25:25.674 END TEST nvmf_wait_for_buf 00:25:25.674 ************************************ 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:25.674 13:30:33 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:25.674 ************************************ 00:25:25.674 START TEST nvmf_fuzz 00:25:25.674 ************************************ 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:25.674 * Looking for test storage... 00:25:25.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.674 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:25.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.675 --rc genhtml_branch_coverage=1 00:25:25.675 --rc genhtml_function_coverage=1 00:25:25.675 --rc genhtml_legend=1 00:25:25.675 --rc geninfo_all_blocks=1 00:25:25.675 --rc geninfo_unexecuted_blocks=1 00:25:25.675 00:25:25.675 ' 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:25.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.675 --rc genhtml_branch_coverage=1 00:25:25.675 --rc genhtml_function_coverage=1 00:25:25.675 --rc genhtml_legend=1 00:25:25.675 --rc geninfo_all_blocks=1 00:25:25.675 --rc geninfo_unexecuted_blocks=1 00:25:25.675 00:25:25.675 ' 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:25.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.675 --rc genhtml_branch_coverage=1 00:25:25.675 --rc genhtml_function_coverage=1 00:25:25.675 --rc genhtml_legend=1 00:25:25.675 --rc geninfo_all_blocks=1 00:25:25.675 --rc geninfo_unexecuted_blocks=1 00:25:25.675 00:25:25.675 ' 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:25.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.675 --rc genhtml_branch_coverage=1 00:25:25.675 --rc genhtml_function_coverage=1 00:25:25.675 --rc genhtml_legend=1 00:25:25.675 --rc geninfo_all_blocks=1 00:25:25.675 --rc geninfo_unexecuted_blocks=1 00:25:25.675 00:25:25.675 ' 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:25.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:25.675 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:25.676 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:33.817 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:33.818 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:33.818 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:33.818 Found net devices under 0000:31:00.0: cvl_0_0 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:33.818 Found net devices under 0000:31:00.1: cvl_0_1 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:33.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:25:33.818 00:25:33.818 --- 10.0.0.2 ping statistics --- 00:25:33.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.818 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:25:33.818 00:25:33.818 --- 10.0.0.1 ping statistics --- 00:25:33.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.818 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:33.818 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.819 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:33.819 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:33.819 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3927528 00:25:33.819 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:33.819 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:33.819 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3927528 00:25:33.819 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # '[' -z 3927528 ']' 00:25:33.819 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.819 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:33.819 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
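The nvmf_tcp_init sequence traced above turns the two E810 ports into a self-contained test link: the target-side port (cvl_0_0 in this run) is moved into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, so 10.0.0.1 and 10.0.0.2 exchange traffic over real hardware on a single host. A minimal sketch of the same wiring, using the interface names from this run:

    # isolate the target port in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address both ends of the link
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator side, then verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched under ip netns exec so it listens inside the namespace, and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock (which lives in the filesystem, not the network namespace) accepts connections.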
00:25:33.819 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:33.819 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:34.389 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:34.389 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@866 -- # return 0 00:25:34.389 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:34.389 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.389 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:34.389 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.389 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:34.389 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.389 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:34.649 Malloc0 00:25:34.649 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.649 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:34.649 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.650 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:34.650 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.650 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.650 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.650 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:34.650 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.650 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.650 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.650 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:34.650 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.650 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:34.650 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:06.759 Fuzzing completed. 
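The whole fabrics_fuzz setup just traced comes down to five RPCs followed by the fuzzer itself. A condensed sketch, assuming scripts/rpc.py as the client (the harness's rpc_cmd wrapper talks to the same RPC socket):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create -b Malloc0 64 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # randomized 30 s run with a fixed seed, so failures replay deterministically
    nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

The second pass below swaps -N (random commands) for -j example.json, replaying a canned command set against the same listener.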
Shutting down the fuzz application 00:26:06.759 00:26:06.759 Dumping successful admin opcodes: 00:26:06.759 8, 9, 10, 24, 00:26:06.759 Dumping successful io opcodes: 00:26:06.759 0, 9, 00:26:06.759 NS: 0x2000008efec0 I/O qp, Total commands completed: 804590, total successful commands: 4668, random_seed: 1650553280 00:26:06.759 NS: 0x2000008efec0 admin qp, Total commands completed: 100897, total successful commands: 831, random_seed: 3785336832 00:26:06.759 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:06.759 Fuzzing completed. Shutting down the fuzz application 00:26:06.759 00:26:06.759 Dumping successful admin opcodes: 00:26:06.759 24, 00:26:06.759 Dumping successful io opcodes: 00:26:06.759 00:26:06.759 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3585850065 00:26:06.759 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3585950635 00:26:06.759 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.759 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.759 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:06.759 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.759 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:06.759 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:06.759 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:06.759 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:26:06.759 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:06.759 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:26:06.759 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:06.759 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:06.759 rmmod nvme_tcp 00:26:06.759 rmmod nvme_fabrics 00:26:06.759 rmmod nvme_keyring 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3927528 ']' 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3927528 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3927528 ']' 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # kill -0 3927528 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # uname 00:26:07.023 13:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3927528 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3927528' 00:26:07.023 killing process with pid 3927528 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@971 -- # kill 3927528 00:26:07.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@976 -- # wait 3927528 00:26:07.964 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:07.964 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:07.964 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:07.964 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:26:07.964 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:26:07.964 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:07.964 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:26:07.964 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:07.964 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:07.964 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.964 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.964 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.875 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:09.875 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:09.875 00:26:09.875 real 0m44.736s 00:26:09.875 user 0m58.812s 00:26:09.875 sys 0m16.236s 00:26:09.875 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:09.875 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:09.875 ************************************ 00:26:09.875 END TEST nvmf_fuzz 00:26:09.875 ************************************ 00:26:10.139 13:31:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:10.139 13:31:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:10.139 13:31:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:10.139 13:31:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:10.139 
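One detail worth calling out in that teardown: setup inserted its firewall rule with -m comment --comment 'SPDK_NVMF:...', so the iptr helper can remove exactly the rules this test added and nothing else:

    # strip only the comment-tagged SPDK rules, leave the rest of the ruleset intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore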
************************************ 00:26:10.139 START TEST nvmf_multiconnection 00:26:10.139 ************************************ 00:26:10.139 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:10.139 * Looking for test storage... 00:26:10.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:10.139 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:10.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.139 --rc genhtml_branch_coverage=1 00:26:10.139 --rc genhtml_function_coverage=1 00:26:10.139 --rc genhtml_legend=1 00:26:10.139 --rc geninfo_all_blocks=1 00:26:10.139 --rc geninfo_unexecuted_blocks=1 00:26:10.139 00:26:10.139 ' 00:26:10.139 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:10.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.139 --rc genhtml_branch_coverage=1 00:26:10.139 --rc genhtml_function_coverage=1 00:26:10.139 --rc genhtml_legend=1 00:26:10.139 --rc geninfo_all_blocks=1 00:26:10.140 --rc geninfo_unexecuted_blocks=1 00:26:10.140 00:26:10.140 ' 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:10.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.140 --rc genhtml_branch_coverage=1 00:26:10.140 --rc genhtml_function_coverage=1 00:26:10.140 --rc genhtml_legend=1 00:26:10.140 --rc geninfo_all_blocks=1 00:26:10.140 --rc geninfo_unexecuted_blocks=1 00:26:10.140 00:26:10.140 ' 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:10.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.140 --rc genhtml_branch_coverage=1 00:26:10.140 --rc genhtml_function_coverage=1 00:26:10.140 --rc genhtml_legend=1 00:26:10.140 --rc geninfo_all_blocks=1 00:26:10.140 --rc geninfo_unexecuted_blocks=1 00:26:10.140 00:26:10.140 ' 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
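The lt/cmp_versions trace above is scripts/common.sh deciding that the installed lcov (1.15) predates 2.x: each version string is split on dots, dashes, and colons, and the components are compared numerically left to right. A standalone sketch of the same idea, assuming purely numeric components (the real helper's decimal guard rejects anything else):

    # version_lt 1.15 2  ->  returns 0 (true) because 1 < 2 in the first component
    version_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }

Note the comparison is numeric, not lexicographic: component-wise, 1.15 is a later version than 1.2 (15 > 2), which is exactly the case a plain string comparison would get wrong.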
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:10.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:10.140 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:18.283 13:31:26 
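The '[: : integer expression expected' complaint above is a real shell wart captured by the trace: nvmf/common.sh line 33 runs a numeric test of the form [ "$flag" -eq 1 ] (the actual variable name is not visible in this trace) while the variable is empty in this configuration, and test(1) cannot compare an empty string as an integer. It is harmless here, since the trace shows the script falling through to the next check, but the conventional hardening is to default the expansion first:

    # fails with "[: : integer expression expected" when flag is unset or empty
    [ "$flag" -eq 1 ] && echo enabled
    # robust: supply a numeric default before comparing
    [ "${flag:-0}" -eq 1 ] && echo enabled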
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:18.283 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:18.284 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:18.284 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:18.284 Found net devices under 0000:31:00.0: cvl_0_0 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
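The gather_supported_nvmf_pci_devs pass above classifies NICs by PCI vendor:device ID (the e810, x722 and mlx arrays are populated from a pci_bus_cache map keyed that way), then walks each match's /sys/bus/pci/devices/$pci/net/ directory to find the kernel netdev. A minimal stand-in for that cache, built straight from sysfs rather than the script's exact construction:

    # map "vendor:device" -> space-separated BDF list
    declare -A pci_bus_cache
    for dev in /sys/bus/pci/devices/*; do
        key="$(<"$dev/vendor"):$(<"$dev/device")"
        pci_bus_cache[$key]+="${dev##*/} "
    done
    # the E810 IDs seen in this run: 0x8086 (Intel) : 0x159b
    e810=(${pci_bus_cache["0x8086:0x159b"]})
    for pci in "${e810[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        echo "$pci -> ${pci_net_devs[*]##*/}"
    done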
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:18.284 Found net devices under 0000:31:00.1: cvl_0_1 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:18.284 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:18.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.722 ms 00:26:18.545 00:26:18.545 --- 10.0.0.2 ping statistics --- 00:26:18.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.545 rtt min/avg/max/mdev = 0.722/0.722/0.722/0.000 ms 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:18.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:26:18.545 00:26:18.545 --- 10.0.0.1 ping statistics --- 00:26:18.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.545 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3938738 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3938738 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:18.545 13:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # '[' -z 3938738 ']' 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.545 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:18.546 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.546 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:18.546 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.546 [2024-11-07 13:31:26.532470] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:26:18.546 [2024-11-07 13:31:26.532609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.806 [2024-11-07 13:31:26.694242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:18.806 [2024-11-07 13:31:26.795366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.806 [2024-11-07 13:31:26.795413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.806 [2024-11-07 13:31:26.795425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.806 [2024-11-07 13:31:26.795436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.806 [2024-11-07 13:31:26.795445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
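The DPDK/EAL banner above also spells out how to inspect this run: the target was started with -e 0xFFFF (all tracepoint groups enabled) and shm id 0, so the trace buffer is live in shared memory while the app runs. Following the notices:

    # snapshot the running target's tracepoints (app name nvmf, shm id 0)
    spdk_trace -s nvmf -i 0
    # or keep the buffer for offline analysis once the app exits
    cp /dev/shm/nvmf_trace.0 /tmp/
    spdk_trace -f /tmp/nvmf_trace.0

The first command is verbatim from the notice; feeding the copied shm file back through spdk_trace's file-input mode (-f) is an assumption layered on the notice's "copy /dev/shm/nvmf_trace.0" hint.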
00:26:18.806 [2024-11-07 13:31:26.797694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.806 [2024-11-07 13:31:26.797776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.806 [2024-11-07 13:31:26.797950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:18.806 [2024-11-07 13:31:26.798135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@866 -- # return 0 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.377 [2024-11-07 13:31:27.354683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.377 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.637 Malloc1 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
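From here to the end of target setup, multiconnection.sh runs one loop unrolled eleven times (NVMF_SUBSYS=11, set at multiconnection.sh@14 earlier): per iteration one 64 MB malloc bdev with 512-byte blocks, one subsystem, one namespace, and one listener, all on the same 10.0.0.2:4420 portal. Condensed, with the harness's rpc_cmd written as rpc.py:

    for i in $(seq 1 11); do
        rpc.py bdev_malloc_create 64 512 -b Malloc$i
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done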
00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.637 [2024-11-07 13:31:27.470654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.637 Malloc2 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.637 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.638 13:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.638 Malloc3 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.638 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.898 Malloc4 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.898 Malloc5 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.898 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.159 Malloc6 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.159 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.159 Malloc7 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.159 Malloc8 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.159 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 Malloc9 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:20.420 13:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 Malloc10 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.420 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.421 Malloc11 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.421 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.681 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.681 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:20.681 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.681 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:22.062 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:22.062 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:22.062 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:22.062 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:22.062 13:31:30 
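[editor's note] The trace above (multiconnection.sh@21-25) repeats one setup block per subsystem: create a 64 MiB malloc bdev with 512-byte blocks, create the subsystem, attach the bdev as its namespace, and open a TCP listener. A minimal standalone sketch of the same sequence, assuming SPDK's rpc.py is on PATH and talks to the same running target that the log's rpc_cmd wrapper dispatches to:

# One pass per subsystem, mirroring multiconnection.sh@21-25 (NVMF_SUBSYS=11 in this run).
for i in $(seq 1 11); do
    rpc.py bdev_malloc_create 64 512 -b "Malloc$i"                             # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"  # -a: allow any host; -s: serial number
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"      # expose the bdev as a namespace
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done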
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:24.602 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:24.602 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:24.602 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK1 00:26:24.602 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:24.602 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:24.602 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:24.602 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.602 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:25.542 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:25.542 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:25.542 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:25.542 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:25.542 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:28.083 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:28.083 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:28.083 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK2 00:26:28.083 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:28.083 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:28.083 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:28.083 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.083 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:29.467 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:29.467 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:29.467 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 
nvme_devices=0 00:26:29.467 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:29.467 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:31.381 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:31.381 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:31.381 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK3 00:26:31.381 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:31.381 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:31.381 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:31.381 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.381 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:33.291 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:33.291 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:33.291 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:33.291 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:33.291 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:35.200 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:35.200 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:35.200 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK4 00:26:35.200 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:35.200 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:35.200 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:35.200 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.200 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:36.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:36.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # 
local i=0 00:26:36.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:36.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:36.581 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:39.121 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:39.121 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:39.122 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK5 00:26:39.122 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:39.122 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:39.122 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:39.122 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:39.122 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:40.500 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:40.500 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:40.500 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:40.500 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:40.500 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:42.409 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:42.409 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:42.409 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK6 00:26:42.409 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:42.409 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:42.409 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:42.409 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.410 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:44.320 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:44.320 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:44.320 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:44.320 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:44.320 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:46.229 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:46.229 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:46.229 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK7 00:26:46.229 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:46.229 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:46.229 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:46.229 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.229 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:48.134 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:48.134 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:48.134 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:48.134 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:48.134 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:50.040 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:50.040 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:50.040 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK8 00:26:50.040 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:50.040 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:50.040 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:50.040 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.040 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:52.060 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:52.060 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:52.060 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:52.060 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:52.060 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:53.968 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:53.968 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:53.968 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK9 00:26:53.968 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:53.968 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:53.968 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:53.968 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.968 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:55.873 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:55.873 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:55.873 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:55.873 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:55.873 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:57.779 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:57.779 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:57.779 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK10 00:26:57.779 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:57.779 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:57.779 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:57.779 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.779 13:32:05 
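[editor's note] Each iteration traced above issues nvme connect for one subsystem, after which waitforserial (common/autotest_common.sh@1200-1210) polls lsblk until a block device advertising the expected serial appears. A simplified sketch of that pattern for cnode1/SPDK1, using the hostnqn/hostid printed in the log and the same retry bound as the traced "(( i++ <= 15 ))" loop; the real helper also compares the device count against an expected counter, which is elided here:

# Connect to one subsystem, then wait for its namespace to show up as a block device.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
i=0
while (( i++ <= 15 )); do
    sleep 2
    # SERIAL matches the -s value passed to nvmf_create_subsystem (SPDK1)
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDK1) == 1 )) && break
done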
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:59.688 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:59.688 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:59.688 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:59.688 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:59.688 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:01.595 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:01.595 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:01.595 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK11 00:27:01.595 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:01.595 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:01.595 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:01.595 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:01.854 [global] 00:27:01.854 thread=1 00:27:01.854 invalidate=1 00:27:01.854 rw=read 00:27:01.854 time_based=1 00:27:01.854 runtime=10 00:27:01.854 ioengine=libaio 00:27:01.854 direct=1 00:27:01.854 bs=262144 00:27:01.854 iodepth=64 00:27:01.854 norandommap=1 00:27:01.854 numjobs=1 00:27:01.854 00:27:01.854 [job0] 00:27:01.854 filename=/dev/nvme0n1 00:27:01.854 [job1] 00:27:01.854 filename=/dev/nvme10n1 00:27:01.854 [job2] 00:27:01.854 filename=/dev/nvme1n1 00:27:01.854 [job3] 00:27:01.854 filename=/dev/nvme2n1 00:27:01.854 [job4] 00:27:01.854 filename=/dev/nvme3n1 00:27:01.854 [job5] 00:27:01.854 filename=/dev/nvme4n1 00:27:01.854 [job6] 00:27:01.854 filename=/dev/nvme5n1 00:27:01.854 [job7] 00:27:01.854 filename=/dev/nvme6n1 00:27:01.854 [job8] 00:27:01.854 filename=/dev/nvme7n1 00:27:01.854 [job9] 00:27:01.854 filename=/dev/nvme8n1 00:27:01.854 [job10] 00:27:01.854 filename=/dev/nvme9n1 00:27:01.854 Could not set queue depth (nvme0n1) 00:27:01.854 Could not set queue depth (nvme10n1) 00:27:01.854 Could not set queue depth (nvme1n1) 00:27:01.854 Could not set queue depth (nvme2n1) 00:27:01.854 Could not set queue depth (nvme3n1) 00:27:01.854 Could not set queue depth (nvme4n1) 00:27:01.854 Could not set queue depth (nvme5n1) 00:27:01.854 Could not set queue depth (nvme6n1) 00:27:01.854 Could not set queue depth (nvme7n1) 00:27:01.854 Could not set queue depth (nvme8n1) 00:27:01.854 Could not set queue depth (nvme9n1) 00:27:02.426 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:02.426 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:02.426 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:02.426 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:02.426 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:02.426 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:02.426 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:02.426 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:02.426 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:02.426 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:02.426 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:02.426 fio-3.35 00:27:02.426 Starting 11 threads 00:27:14.658 00:27:14.658 job0: (groupid=0, jobs=1): err= 0: pid=3947220: Thu Nov 7 13:32:20 2024 00:27:14.658 read: IOPS=373, BW=93.4MiB/s (98.0MB/s)(940MiB/10064msec) 00:27:14.658 slat (usec): min=11, max=69238, avg=2592.18, stdev=7474.69 00:27:14.658 clat (msec): min=17, max=381, avg=168.38, stdev=82.71 00:27:14.658 lat (msec): min=17, max=381, avg=170.97, stdev=83.86 00:27:14.658 clat percentiles (msec): 00:27:14.658 | 1.00th=[ 37], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 52], 00:27:14.658 | 30.00th=[ 136], 40.00th=[ 169], 50.00th=[ 186], 60.00th=[ 203], 00:27:14.658 | 70.00th=[ 218], 80.00th=[ 234], 90.00th=[ 262], 95.00th=[ 288], 00:27:14.658 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 380], 00:27:14.658 | 99.99th=[ 380] 00:27:14.658 bw ( KiB/s): min=52736, max=345600, per=12.40%, avg=94668.80, stdev=66241.86, samples=20 00:27:14.658 iops : min= 206, max= 1350, avg=369.80, stdev=258.76, samples=20 00:27:14.658 lat (msec) : 20=0.16%, 50=18.11%, 100=7.34%, 250=61.21%, 500=13.19% 00:27:14.658 cpu : usr=0.16%, sys=1.42%, ctx=616, majf=0, minf=4097 00:27:14.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:14.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.658 issued rwts: total=3761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.658 job1: (groupid=0, jobs=1): err= 0: pid=3947227: Thu Nov 7 13:32:20 2024 00:27:14.658 read: IOPS=388, BW=97.2MiB/s (102MB/s)(977MiB/10051msec) 00:27:14.658 slat (usec): min=11, max=208363, avg=1744.72, stdev=8801.85 00:27:14.658 clat (usec): min=1597, max=757219, avg=162719.69, stdev=135197.24 00:27:14.658 lat (usec): min=1640, max=757276, avg=164464.40, stdev=136403.34 00:27:14.658 clat percentiles (msec): 00:27:14.658 | 1.00th=[ 15], 5.00th=[ 34], 10.00th=[ 55], 20.00th=[ 85], 00:27:14.658 | 30.00th=[ 104], 40.00th=[ 113], 50.00th=[ 121], 60.00th=[ 134], 00:27:14.658 | 70.00th=[ 144], 80.00th=[ 180], 90.00th=[ 388], 95.00th=[ 514], 00:27:14.658 | 99.00th=[ 625], 99.50th=[ 659], 99.90th=[ 667], 99.95th=[ 751], 00:27:14.658 | 99.99th=[ 760] 00:27:14.658 bw ( KiB/s): min=24576, max=238080, per=12.90%, avg=98432.00, 
stdev=56557.59, samples=20 00:27:14.658 iops : min= 96, max= 930, avg=384.50, stdev=220.93, samples=20 00:27:14.658 lat (msec) : 2=0.05%, 4=0.10%, 10=0.61%, 20=1.20%, 50=6.29% 00:27:14.658 lat (msec) : 100=18.73%, 250=58.34%, 500=8.90%, 750=5.71%, 1000=0.05% 00:27:14.658 cpu : usr=0.15%, sys=1.26%, ctx=816, majf=0, minf=4097 00:27:14.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:14.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.658 issued rwts: total=3908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.658 job2: (groupid=0, jobs=1): err= 0: pid=3947248: Thu Nov 7 13:32:20 2024 00:27:14.658 read: IOPS=193, BW=48.4MiB/s (50.8MB/s)(491MiB/10137msec) 00:27:14.658 slat (usec): min=12, max=317494, avg=4009.58, stdev=19950.12 00:27:14.658 clat (msec): min=10, max=1025, avg=325.87, stdev=204.51 00:27:14.658 lat (msec): min=10, max=1068, avg=329.88, stdev=207.20 00:27:14.658 clat percentiles (msec): 00:27:14.658 | 1.00th=[ 20], 5.00th=[ 91], 10.00th=[ 117], 20.00th=[ 161], 00:27:14.658 | 30.00th=[ 190], 40.00th=[ 224], 50.00th=[ 251], 60.00th=[ 288], 00:27:14.658 | 70.00th=[ 430], 80.00th=[ 550], 90.00th=[ 659], 95.00th=[ 718], 00:27:14.658 | 99.00th=[ 818], 99.50th=[ 827], 99.90th=[ 869], 99.95th=[ 1028], 00:27:14.658 | 99.99th=[ 1028] 00:27:14.658 bw ( KiB/s): min=12800, max=128000, per=6.37%, avg=48640.00, stdev=28486.13, samples=20 00:27:14.658 iops : min= 50, max= 500, avg=190.00, stdev=111.27, samples=20 00:27:14.658 lat (msec) : 20=1.02%, 50=0.97%, 100=4.23%, 250=43.74%, 500=24.69% 00:27:14.658 lat (msec) : 750=21.89%, 1000=3.41%, 2000=0.05% 00:27:14.658 cpu : usr=0.08%, sys=0.66%, ctx=356, majf=0, minf=3534 00:27:14.658 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:27:14.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.658 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.658 issued rwts: total=1964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.658 job3: (groupid=0, jobs=1): err= 0: pid=3947259: Thu Nov 7 13:32:20 2024 00:27:14.658 read: IOPS=137, BW=34.3MiB/s (36.0MB/s)(348MiB/10134msec) 00:27:14.658 slat (usec): min=11, max=194333, avg=5869.13, stdev=20956.47 00:27:14.658 clat (msec): min=15, max=961, avg=459.29, stdev=188.69 00:27:14.658 lat (msec): min=17, max=961, avg=465.16, stdev=191.98 00:27:14.658 clat percentiles (msec): 00:27:14.658 | 1.00th=[ 67], 5.00th=[ 174], 10.00th=[ 224], 20.00th=[ 288], 00:27:14.658 | 30.00th=[ 330], 40.00th=[ 393], 50.00th=[ 456], 60.00th=[ 514], 00:27:14.658 | 70.00th=[ 575], 80.00th=[ 634], 90.00th=[ 718], 95.00th=[ 760], 00:27:14.658 | 99.00th=[ 885], 99.50th=[ 936], 99.90th=[ 944], 99.95th=[ 961], 00:27:14.658 | 99.99th=[ 961] 00:27:14.658 bw ( KiB/s): min=16384, max=61440, per=4.45%, avg=33996.80, stdev=12745.88, samples=20 00:27:14.658 iops : min= 64, max= 240, avg=132.80, stdev=49.79, samples=20 00:27:14.658 lat (msec) : 20=0.22%, 50=0.50%, 100=0.50%, 250=12.72%, 500=43.39% 00:27:14.658 lat (msec) : 750=36.35%, 1000=6.32% 00:27:14.658 cpu : usr=0.04%, sys=0.53%, ctx=274, majf=0, minf=4097 00:27:14.658 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:27:14.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:14.658 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.658 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.658 job4: (groupid=0, jobs=1): err= 0: pid=3947266: Thu Nov 7 13:32:20 2024 00:27:14.658 read: IOPS=385, BW=96.5MiB/s (101MB/s)(979MiB/10140msec) 00:27:14.658 slat (usec): min=9, max=425052, avg=1670.19, stdev=12132.57 00:27:14.658 clat (msec): min=11, max=974, avg=163.88, stdev=196.27 00:27:14.658 lat (msec): min=11, max=998, avg=165.55, stdev=197.94 00:27:14.658 clat percentiles (msec): 00:27:14.658 | 1.00th=[ 17], 5.00th=[ 29], 10.00th=[ 36], 20.00th=[ 41], 00:27:14.658 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 64], 60.00th=[ 112], 00:27:14.658 | 70.00th=[ 157], 80.00th=[ 284], 90.00th=[ 447], 95.00th=[ 609], 00:27:14.658 | 99.00th=[ 911], 99.50th=[ 936], 99.90th=[ 944], 99.95th=[ 978], 00:27:14.658 | 99.99th=[ 978] 00:27:14.658 bw ( KiB/s): min= 8704, max=369152, per=12.91%, avg=98560.00, stdev=105163.94, samples=20 00:27:14.658 iops : min= 34, max= 1442, avg=385.00, stdev=410.80, samples=20 00:27:14.658 lat (msec) : 20=2.04%, 50=44.23%, 100=10.78%, 250=19.16%, 500=15.00% 00:27:14.658 lat (msec) : 750=6.23%, 1000=2.55% 00:27:14.658 cpu : usr=0.14%, sys=1.25%, ctx=747, majf=0, minf=4097 00:27:14.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:14.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.658 issued rwts: total=3914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.658 job5: (groupid=0, jobs=1): err= 0: pid=3947290: Thu Nov 7 13:32:20 2024 00:27:14.658 read: IOPS=373, BW=93.4MiB/s (98.0MB/s)(939MiB/10051msec) 00:27:14.658 slat (usec): min=7, max=111845, avg=2661.54, stdev=9196.76 00:27:14.658 clat (msec): min=12, max=752, avg=168.33, stdev=117.00 00:27:14.658 lat (msec): min=13, max=752, avg=171.00, stdev=118.74 00:27:14.658 clat percentiles (msec): 00:27:14.658 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 70], 00:27:14.659 | 30.00th=[ 114], 40.00th=[ 140], 50.00th=[ 157], 60.00th=[ 169], 00:27:14.659 | 70.00th=[ 178], 80.00th=[ 197], 90.00th=[ 326], 95.00th=[ 443], 00:27:14.659 | 99.00th=[ 592], 99.50th=[ 659], 99.90th=[ 709], 99.95th=[ 751], 00:27:14.659 | 99.99th=[ 751] 00:27:14.659 bw ( KiB/s): min=23040, max=246784, per=12.38%, avg=94515.20, stdev=55277.13, samples=20 00:27:14.659 iops : min= 90, max= 964, avg=369.20, stdev=215.93, samples=20 00:27:14.659 lat (msec) : 20=0.27%, 50=13.90%, 100=13.47%, 250=56.39%, 500=13.79% 00:27:14.659 lat (msec) : 750=2.10%, 1000=0.08% 00:27:14.659 cpu : usr=0.20%, sys=1.30%, ctx=613, majf=0, minf=4097 00:27:14.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:14.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.659 issued rwts: total=3756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.659 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.659 job6: (groupid=0, jobs=1): err= 0: pid=3947306: Thu Nov 7 13:32:20 2024 00:27:14.659 read: IOPS=147, BW=37.0MiB/s (38.8MB/s)(375MiB/10136msec) 00:27:14.659 slat (usec): min=12, max=315072, avg=4438.92, stdev=19848.55 00:27:14.659 clat (msec): min=27, max=974, 
avg=427.51, stdev=209.93 00:27:14.659 lat (msec): min=27, max=1089, avg=431.95, stdev=212.21 00:27:14.659 clat percentiles (msec): 00:27:14.659 | 1.00th=[ 42], 5.00th=[ 82], 10.00th=[ 148], 20.00th=[ 201], 00:27:14.659 | 30.00th=[ 317], 40.00th=[ 368], 50.00th=[ 430], 60.00th=[ 493], 00:27:14.659 | 70.00th=[ 550], 80.00th=[ 625], 90.00th=[ 726], 95.00th=[ 776], 00:27:14.659 | 99.00th=[ 860], 99.50th=[ 885], 99.90th=[ 902], 99.95th=[ 978], 00:27:14.659 | 99.99th=[ 978] 00:27:14.659 bw ( KiB/s): min=20992, max=76288, per=4.82%, avg=36761.60, stdev=14674.24, samples=20 00:27:14.659 iops : min= 82, max= 298, avg=143.60, stdev=57.32, samples=20 00:27:14.659 lat (msec) : 50=1.33%, 100=4.40%, 250=18.07%, 500=38.47%, 750=30.87% 00:27:14.659 lat (msec) : 1000=6.87% 00:27:14.659 cpu : usr=0.07%, sys=0.54%, ctx=311, majf=0, minf=4097 00:27:14.659 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:27:14.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.659 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.659 issued rwts: total=1500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.659 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.659 job7: (groupid=0, jobs=1): err= 0: pid=3947317: Thu Nov 7 13:32:20 2024 00:27:14.659 read: IOPS=239, BW=59.8MiB/s (62.8MB/s)(607MiB/10143msec) 00:27:14.659 slat (usec): min=12, max=281205, avg=3658.83, stdev=16955.94 00:27:14.659 clat (msec): min=12, max=1001, avg=263.34, stdev=259.90 00:27:14.659 lat (msec): min=12, max=1050, avg=267.00, stdev=263.39 00:27:14.659 clat percentiles (msec): 00:27:14.659 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:27:14.659 | 30.00th=[ 50], 40.00th=[ 89], 50.00th=[ 148], 60.00th=[ 211], 00:27:14.659 | 70.00th=[ 422], 80.00th=[ 542], 90.00th=[ 676], 95.00th=[ 743], 00:27:14.659 | 99.00th=[ 936], 99.50th=[ 986], 99.90th=[ 1003], 99.95th=[ 1003], 00:27:14.659 | 99.99th=[ 1003] 00:27:14.659 bw ( KiB/s): min=16896, max=358400, per=7.93%, avg=60518.40, stdev=82001.07, samples=20 00:27:14.659 iops : min= 66, max= 1400, avg=236.40, stdev=320.32, samples=20 00:27:14.659 lat (msec) : 20=0.08%, 50=30.44%, 100=12.40%, 250=19.44%, 500=14.09% 00:27:14.659 lat (msec) : 750=18.95%, 1000=4.37%, 2000=0.25% 00:27:14.659 cpu : usr=0.10%, sys=0.76%, ctx=409, majf=0, minf=4097 00:27:14.659 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:27:14.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.659 issued rwts: total=2428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.659 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.659 job8: (groupid=0, jobs=1): err= 0: pid=3947345: Thu Nov 7 13:32:20 2024 00:27:14.659 read: IOPS=244, BW=61.2MiB/s (64.2MB/s)(618MiB/10095msec) 00:27:14.659 slat (usec): min=6, max=599347, avg=2836.14, stdev=19325.54 00:27:14.659 clat (msec): min=11, max=924, avg=258.20, stdev=232.27 00:27:14.659 lat (msec): min=11, max=1317, avg=261.04, stdev=234.73 00:27:14.659 clat percentiles (msec): 00:27:14.659 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:27:14.659 | 30.00th=[ 72], 40.00th=[ 121], 50.00th=[ 201], 60.00th=[ 255], 00:27:14.659 | 70.00th=[ 317], 80.00th=[ 468], 90.00th=[ 651], 95.00th=[ 726], 00:27:14.659 | 99.00th=[ 911], 99.50th=[ 919], 99.90th=[ 927], 99.95th=[ 927], 00:27:14.659 | 99.99th=[ 927] 00:27:14.659 bw ( KiB/s): min=14848, 
max=262144, per=8.07%, avg=61619.20, stdev=54779.41, samples=20 00:27:14.659 iops : min= 58, max= 1024, avg=240.70, stdev=213.98, samples=20 00:27:14.659 lat (msec) : 20=0.40%, 50=22.22%, 100=12.46%, 250=23.92%, 500=22.70% 00:27:14.659 lat (msec) : 750=14.89%, 1000=3.40% 00:27:14.659 cpu : usr=0.13%, sys=0.79%, ctx=513, majf=0, minf=4097 00:27:14.659 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:27:14.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.659 issued rwts: total=2471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.659 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.659 job9: (groupid=0, jobs=1): err= 0: pid=3947358: Thu Nov 7 13:32:20 2024 00:27:14.659 read: IOPS=202, BW=50.6MiB/s (53.0MB/s)(513MiB/10149msec) 00:27:14.659 slat (usec): min=8, max=618223, avg=4374.34, stdev=23547.29 00:27:14.659 clat (msec): min=21, max=1039, avg=311.46, stdev=214.22 00:27:14.659 lat (msec): min=23, max=1277, avg=315.83, stdev=216.46 00:27:14.659 clat percentiles (msec): 00:27:14.659 | 1.00th=[ 39], 5.00th=[ 56], 10.00th=[ 81], 20.00th=[ 157], 00:27:14.659 | 30.00th=[ 190], 40.00th=[ 230], 50.00th=[ 268], 60.00th=[ 296], 00:27:14.659 | 70.00th=[ 326], 80.00th=[ 435], 90.00th=[ 651], 95.00th=[ 760], 00:27:14.659 | 99.00th=[ 978], 99.50th=[ 978], 99.90th=[ 1036], 99.95th=[ 1036], 00:27:14.659 | 99.99th=[ 1036] 00:27:14.659 bw ( KiB/s): min=12800, max=126976, per=7.02%, avg=53598.32, stdev=29163.06, samples=19 00:27:14.659 iops : min= 50, max= 496, avg=209.37, stdev=113.92, samples=19 00:27:14.659 lat (msec) : 50=3.41%, 100=9.50%, 250=32.29%, 500=37.26%, 750=12.18% 00:27:14.659 lat (msec) : 1000=5.16%, 2000=0.19% 00:27:14.659 cpu : usr=0.06%, sys=0.74%, ctx=366, majf=0, minf=4097 00:27:14.659 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:27:14.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.659 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.659 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.659 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.659 job10: (groupid=0, jobs=1): err= 0: pid=3947369: Thu Nov 7 13:32:20 2024 00:27:14.659 read: IOPS=309, BW=77.3MiB/s (81.0MB/s)(778MiB/10061msec) 00:27:14.659 slat (usec): min=8, max=244031, avg=2957.13, stdev=9435.37 00:27:14.659 clat (msec): min=34, max=671, avg=203.90, stdev=83.38 00:27:14.659 lat (msec): min=34, max=790, avg=206.86, stdev=84.24 00:27:14.659 clat percentiles (msec): 00:27:14.659 | 1.00th=[ 50], 5.00th=[ 89], 10.00th=[ 133], 20.00th=[ 157], 00:27:14.659 | 30.00th=[ 171], 40.00th=[ 184], 50.00th=[ 194], 60.00th=[ 207], 00:27:14.659 | 70.00th=[ 222], 80.00th=[ 239], 90.00th=[ 275], 95.00th=[ 313], 00:27:14.659 | 99.00th=[ 625], 99.50th=[ 651], 99.90th=[ 651], 99.95th=[ 651], 00:27:14.659 | 99.99th=[ 676] 00:27:14.659 bw ( KiB/s): min=32256, max=111616, per=10.22%, avg=77977.60, stdev=20305.51, samples=20 00:27:14.659 iops : min= 126, max= 436, avg=304.60, stdev=79.32, samples=20 00:27:14.659 lat (msec) : 50=1.09%, 100=4.69%, 250=78.71%, 500=13.38%, 750=2.12% 00:27:14.659 cpu : usr=0.08%, sys=0.95%, ctx=557, majf=0, minf=4097 00:27:14.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:14.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.659 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:14.659 issued rwts: total=3110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.659 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:14.659 00:27:14.659 Run status group 0 (all jobs): 00:27:14.659 READ: bw=745MiB/s (782MB/s), 34.3MiB/s-97.2MiB/s (36.0MB/s-102MB/s), io=7564MiB (7932MB), run=10051-10149msec 00:27:14.659 00:27:14.659 Disk stats (read/write): 00:27:14.659 nvme0n1: ios=7275/0, merge=0/0, ticks=1223221/0, in_queue=1223221, util=96.41% 00:27:14.659 nvme10n1: ios=7450/0, merge=0/0, ticks=1235021/0, in_queue=1235021, util=96.58% 00:27:14.659 nvme1n1: ios=3838/0, merge=0/0, ticks=1232043/0, in_queue=1232043, util=97.15% 00:27:14.659 nvme2n1: ios=2693/0, merge=0/0, ticks=1229005/0, in_queue=1229005, util=97.30% 00:27:14.659 nvme3n1: ios=7725/0, merge=0/0, ticks=1202387/0, in_queue=1202387, util=97.45% 00:27:14.659 nvme4n1: ios=7166/0, merge=0/0, ticks=1223617/0, in_queue=1223617, util=97.87% 00:27:14.659 nvme5n1: ios=2901/0, merge=0/0, ticks=1224909/0, in_queue=1224909, util=98.05% 00:27:14.659 nvme6n1: ios=4738/0, merge=0/0, ticks=1213824/0, in_queue=1213824, util=98.24% 00:27:14.659 nvme7n1: ios=4518/0, merge=0/0, ticks=1205335/0, in_queue=1205335, util=98.81% 00:27:14.659 nvme8n1: ios=4005/0, merge=0/0, ticks=1204576/0, in_queue=1204576, util=99.08% 00:27:14.659 nvme9n1: ios=5931/0, merge=0/0, ticks=1223569/0, in_queue=1223569, util=99.17% 00:27:14.659 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:14.659 [global] 00:27:14.659 thread=1 00:27:14.659 invalidate=1 00:27:14.659 rw=randwrite 00:27:14.659 time_based=1 00:27:14.659 runtime=10 00:27:14.659 ioengine=libaio 00:27:14.659 direct=1 00:27:14.659 bs=262144 00:27:14.659 iodepth=64 00:27:14.659 norandommap=1 00:27:14.659 numjobs=1 00:27:14.659 00:27:14.659 [job0] 00:27:14.659 filename=/dev/nvme0n1 00:27:14.659 [job1] 00:27:14.659 filename=/dev/nvme10n1 00:27:14.659 [job2] 00:27:14.659 filename=/dev/nvme1n1 00:27:14.659 [job3] 00:27:14.659 filename=/dev/nvme2n1 00:27:14.659 [job4] 00:27:14.659 filename=/dev/nvme3n1 00:27:14.659 [job5] 00:27:14.659 filename=/dev/nvme4n1 00:27:14.659 [job6] 00:27:14.659 filename=/dev/nvme5n1 00:27:14.659 [job7] 00:27:14.659 filename=/dev/nvme6n1 00:27:14.659 [job8] 00:27:14.659 filename=/dev/nvme7n1 00:27:14.659 [job9] 00:27:14.659 filename=/dev/nvme8n1 00:27:14.659 [job10] 00:27:14.659 filename=/dev/nvme9n1 00:27:14.659 Could not set queue depth (nvme0n1) 00:27:14.660 Could not set queue depth (nvme10n1) 00:27:14.660 Could not set queue depth (nvme1n1) 00:27:14.660 Could not set queue depth (nvme2n1) 00:27:14.660 Could not set queue depth (nvme3n1) 00:27:14.660 Could not set queue depth (nvme4n1) 00:27:14.660 Could not set queue depth (nvme5n1) 00:27:14.660 Could not set queue depth (nvme6n1) 00:27:14.660 Could not set queue depth (nvme7n1) 00:27:14.660 Could not set queue depth (nvme8n1) 00:27:14.660 Could not set queue depth (nvme9n1) 00:27:14.660 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:14.660 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:14.660 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:14.660 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, 
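[editor's note] For each phase, fio-wrapper writes the multi-job file printed above (a shared [global] section plus one [jobN] stanza per connected namespace) and runs fio across all 11 devices at once. For a single device, a roughly equivalent direct invocation of the randwrite phase would look like the sketch below; this is illustrative only, since the wrapper drives fio through a generated job file rather than CLI flags:

# Single-device approximation of one [jobN] stanza from the generated job file.
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --thread --invalidate=1 --norandommap --numjobs=1 \
    --rw=randwrite --bs=262144 --iodepth=64 --time_based --runtime=10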
(W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:14.660 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:14.660 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:14.660 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:14.660 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:14.660 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:14.660 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:14.660 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:14.660 fio-3.35 00:27:14.660 Starting 11 threads 00:27:24.645 00:27:24.645 job0: (groupid=0, jobs=1): err= 0: pid=3948751: Thu Nov 7 13:32:31 2024 00:27:24.645 write: IOPS=365, BW=91.4MiB/s (95.9MB/s)(932MiB/10193msec); 0 zone resets 00:27:24.645 slat (usec): min=19, max=90561, avg=2340.52, stdev=6179.39 00:27:24.645 clat (usec): min=1743, max=616784, avg=172618.30, stdev=127488.17 00:27:24.645 lat (msec): min=2, max=616, avg=174.96, stdev=129.14 00:27:24.645 clat percentiles (msec): 00:27:24.645 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 68], 00:27:24.645 | 30.00th=[ 94], 40.00th=[ 124], 50.00th=[ 131], 60.00th=[ 153], 00:27:24.645 | 70.00th=[ 239], 80.00th=[ 317], 90.00th=[ 351], 95.00th=[ 418], 00:27:24.645 | 99.00th=[ 493], 99.50th=[ 506], 99.90th=[ 535], 99.95th=[ 542], 00:27:24.645 | 99.99th=[ 617] 00:27:24.645 bw ( KiB/s): min=32768, max=230400, per=9.08%, avg=93772.80, stdev=53117.32, samples=20 00:27:24.645 iops : min= 128, max= 900, avg=366.30, stdev=207.49, samples=20 00:27:24.645 lat (msec) : 2=0.03%, 4=0.24%, 10=2.90%, 20=5.15%, 50=9.04% 00:27:24.645 lat (msec) : 100=14.19%, 250=38.88%, 500=28.84%, 750=0.72% 00:27:24.645 cpu : usr=0.90%, sys=1.09%, ctx=1659, majf=0, minf=1 00:27:24.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:24.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:24.645 issued rwts: total=0,3727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:24.646 job1: (groupid=0, jobs=1): err= 0: pid=3948763: Thu Nov 7 13:32:31 2024 00:27:24.646 write: IOPS=506, BW=127MiB/s (133MB/s)(1290MiB/10184msec); 0 zone resets 00:27:24.646 slat (usec): min=17, max=114568, avg=1630.72, stdev=4206.29 00:27:24.646 clat (usec): min=1457, max=492667, avg=124597.65, stdev=77903.23 00:27:24.646 lat (usec): min=1515, max=492711, avg=126228.38, stdev=78698.58 00:27:24.646 clat percentiles (msec): 00:27:24.646 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 19], 20.00th=[ 51], 00:27:24.646 | 30.00th=[ 105], 40.00th=[ 114], 50.00th=[ 122], 60.00th=[ 126], 00:27:24.646 | 70.00th=[ 136], 80.00th=[ 167], 90.00th=[ 222], 95.00th=[ 296], 00:27:24.646 | 99.00th=[ 372], 99.50th=[ 409], 99.90th=[ 472], 99.95th=[ 472], 00:27:24.646 | 99.99th=[ 493] 00:27:24.646 bw ( KiB/s): min=51712, max=271360, per=12.63%, avg=130508.80, stdev=60547.83, samples=20 00:27:24.646 iops : min= 202, max= 1060, avg=509.80, stdev=236.51, samples=20 
00:27:24.646 lat (msec) : 2=0.21%, 4=3.14%, 10=2.42%, 20=5.52%, 50=8.74% 00:27:24.646 lat (msec) : 100=7.48%, 250=64.27%, 500=8.22% 00:27:24.646 cpu : usr=1.09%, sys=1.70%, ctx=2253, majf=0, minf=1 00:27:24.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:24.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:24.646 issued rwts: total=0,5161,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:24.646 job2: (groupid=0, jobs=1): err= 0: pid=3948764: Thu Nov 7 13:32:31 2024 00:27:24.646 write: IOPS=301, BW=75.3MiB/s (79.0MB/s)(767MiB/10186msec); 0 zone resets 00:27:24.646 slat (usec): min=24, max=226597, avg=2501.89, stdev=7396.82 00:27:24.646 clat (msec): min=13, max=553, avg=209.84, stdev=105.12 00:27:24.646 lat (msec): min=15, max=557, avg=212.34, stdev=106.62 00:27:24.646 clat percentiles (msec): 00:27:24.646 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 74], 20.00th=[ 124], 00:27:24.646 | 30.00th=[ 153], 40.00th=[ 171], 50.00th=[ 184], 60.00th=[ 213], 00:27:24.646 | 70.00th=[ 292], 80.00th=[ 317], 90.00th=[ 351], 95.00th=[ 380], 00:27:24.646 | 99.00th=[ 460], 99.50th=[ 498], 99.90th=[ 542], 99.95th=[ 550], 00:27:24.646 | 99.99th=[ 550] 00:27:24.646 bw ( KiB/s): min=40960, max=167936, per=7.45%, avg=76928.00, stdev=33893.87, samples=20 00:27:24.646 iops : min= 160, max= 656, avg=300.50, stdev=132.40, samples=20 00:27:24.646 lat (msec) : 20=0.23%, 50=5.77%, 100=9.91%, 250=49.54%, 500=34.13% 00:27:24.646 lat (msec) : 750=0.42% 00:27:24.646 cpu : usr=0.73%, sys=1.10%, ctx=1509, majf=0, minf=1 00:27:24.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:27:24.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:24.646 issued rwts: total=0,3068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:24.646 job3: (groupid=0, jobs=1): err= 0: pid=3948765: Thu Nov 7 13:32:31 2024 00:27:24.646 write: IOPS=381, BW=95.4MiB/s (100MB/s)(963MiB/10092msec); 0 zone resets 00:27:24.646 slat (usec): min=18, max=216031, avg=2563.16, stdev=6144.78 00:27:24.646 clat (msec): min=2, max=331, avg=165.09, stdev=66.35 00:27:24.646 lat (msec): min=3, max=331, avg=167.66, stdev=67.12 00:27:24.646 clat percentiles (msec): 00:27:24.646 | 1.00th=[ 26], 5.00th=[ 57], 10.00th=[ 84], 20.00th=[ 111], 00:27:24.646 | 30.00th=[ 118], 40.00th=[ 142], 50.00th=[ 171], 60.00th=[ 182], 00:27:24.646 | 70.00th=[ 197], 80.00th=[ 228], 90.00th=[ 255], 95.00th=[ 275], 00:27:24.646 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 330], 99.95th=[ 334], 00:27:24.646 | 99.99th=[ 334] 00:27:24.646 bw ( KiB/s): min=55296, max=193536, per=9.39%, avg=96972.80, stdev=35817.82, samples=20 00:27:24.646 iops : min= 216, max= 756, avg=378.80, stdev=139.91, samples=20 00:27:24.646 lat (msec) : 4=0.05%, 10=0.10%, 20=0.47%, 50=2.08%, 100=13.11% 00:27:24.646 lat (msec) : 250=72.58%, 500=11.61% 00:27:24.646 cpu : usr=0.80%, sys=1.08%, ctx=970, majf=0, minf=1 00:27:24.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:24.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:24.646 issued rwts: total=0,3851,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:27:24.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:24.646 job4: (groupid=0, jobs=1): err= 0: pid=3948766: Thu Nov 7 13:32:31 2024 00:27:24.646 write: IOPS=334, BW=83.6MiB/s (87.6MB/s)(844MiB/10094msec); 0 zone resets 00:27:24.646 slat (usec): min=23, max=58059, avg=2825.84, stdev=5681.67 00:27:24.646 clat (msec): min=19, max=500, avg=188.51, stdev=78.09 00:27:24.646 lat (msec): min=19, max=512, avg=191.34, stdev=79.11 00:27:24.646 clat percentiles (msec): 00:27:24.646 | 1.00th=[ 37], 5.00th=[ 66], 10.00th=[ 111], 20.00th=[ 118], 00:27:24.646 | 30.00th=[ 146], 40.00th=[ 174], 50.00th=[ 182], 60.00th=[ 194], 00:27:24.646 | 70.00th=[ 224], 80.00th=[ 247], 90.00th=[ 279], 95.00th=[ 321], 00:27:24.646 | 99.00th=[ 477], 99.50th=[ 489], 99.90th=[ 498], 99.95th=[ 498], 00:27:24.646 | 99.99th=[ 502] 00:27:24.646 bw ( KiB/s): min=38912, max=139264, per=8.21%, avg=84787.20, stdev=27498.59, samples=20 00:27:24.646 iops : min= 152, max= 544, avg=331.20, stdev=107.42, samples=20 00:27:24.646 lat (msec) : 20=0.12%, 50=2.96%, 100=4.03%, 250=74.40%, 500=18.46% 00:27:24.646 lat (msec) : 750=0.03% 00:27:24.646 cpu : usr=0.64%, sys=1.01%, ctx=1007, majf=0, minf=1 00:27:24.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:27:24.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:24.646 issued rwts: total=0,3375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:24.646 job5: (groupid=0, jobs=1): err= 0: pid=3948767: Thu Nov 7 13:32:31 2024 00:27:24.646 write: IOPS=289, BW=72.3MiB/s (75.8MB/s)(735MiB/10159msec); 0 zone resets 00:27:24.646 slat (usec): min=25, max=203150, avg=3067.98, stdev=7616.94 00:27:24.646 clat (msec): min=7, max=507, avg=217.92, stdev=89.29 00:27:24.646 lat (msec): min=7, max=536, avg=220.98, stdev=90.45 00:27:24.646 clat percentiles (msec): 00:27:24.646 | 1.00th=[ 23], 5.00th=[ 89], 10.00th=[ 124], 20.00th=[ 132], 00:27:24.646 | 30.00th=[ 161], 40.00th=[ 192], 50.00th=[ 207], 60.00th=[ 241], 00:27:24.646 | 70.00th=[ 257], 80.00th=[ 292], 90.00th=[ 342], 95.00th=[ 372], 00:27:24.646 | 99.00th=[ 477], 99.50th=[ 489], 99.90th=[ 502], 99.95th=[ 506], 00:27:24.646 | 99.99th=[ 510] 00:27:24.646 bw ( KiB/s): min=46592, max=126976, per=7.13%, avg=73625.60, stdev=22805.32, samples=20 00:27:24.646 iops : min= 182, max= 496, avg=287.60, stdev=89.08, samples=20 00:27:24.646 lat (msec) : 10=0.07%, 20=0.65%, 50=1.77%, 100=3.30%, 250=59.99% 00:27:24.646 lat (msec) : 500=34.03%, 750=0.20% 00:27:24.646 cpu : usr=0.81%, sys=0.82%, ctx=1034, majf=0, minf=1 00:27:24.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:27:24.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:24.646 issued rwts: total=0,2939,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:24.646 job6: (groupid=0, jobs=1): err= 0: pid=3948768: Thu Nov 7 13:32:31 2024 00:27:24.646 write: IOPS=308, BW=77.1MiB/s (80.9MB/s)(784MiB/10159msec); 0 zone resets 00:27:24.646 slat (usec): min=16, max=170897, avg=2516.23, stdev=6784.85 00:27:24.646 clat (usec): min=1861, max=634854, avg=204158.26, stdev=95539.98 00:27:24.646 lat (usec): min=1922, max=634896, avg=206674.50, stdev=96523.43 
00:27:24.646 clat percentiles (msec): 00:27:24.646 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 124], 20.00th=[ 132], 00:27:24.646 | 30.00th=[ 150], 40.00th=[ 169], 50.00th=[ 182], 60.00th=[ 228], 00:27:24.646 | 70.00th=[ 247], 80.00th=[ 264], 90.00th=[ 326], 95.00th=[ 376], 00:27:24.646 | 99.00th=[ 481], 99.50th=[ 502], 99.90th=[ 510], 99.95th=[ 634], 00:27:24.646 | 99.99th=[ 634] 00:27:24.646 bw ( KiB/s): min=39424, max=124928, per=7.61%, avg=78617.60, stdev=23438.36, samples=20 00:27:24.646 iops : min= 154, max= 488, avg=307.10, stdev=91.56, samples=20 00:27:24.646 lat (msec) : 2=0.03%, 4=0.64%, 10=1.75%, 20=2.30%, 50=1.28% 00:27:24.646 lat (msec) : 100=0.83%, 250=65.19%, 500=27.60%, 750=0.38% 00:27:24.646 cpu : usr=0.67%, sys=1.00%, ctx=1357, majf=0, minf=1 00:27:24.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:24.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:24.646 issued rwts: total=0,3134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:24.646 job7: (groupid=0, jobs=1): err= 0: pid=3948769: Thu Nov 7 13:32:31 2024 00:27:24.646 write: IOPS=443, BW=111MiB/s (116MB/s)(1126MiB/10155msec); 0 zone resets 00:27:24.646 slat (usec): min=23, max=29637, avg=2216.91, stdev=4226.26 00:27:24.646 clat (msec): min=19, max=403, avg=141.98, stdev=57.67 00:27:24.646 lat (msec): min=19, max=403, avg=144.20, stdev=58.40 00:27:24.646 clat percentiles (msec): 00:27:24.646 | 1.00th=[ 68], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 85], 00:27:24.646 | 30.00th=[ 105], 40.00th=[ 117], 50.00th=[ 124], 60.00th=[ 150], 00:27:24.646 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 232], 95.00th=[ 255], 00:27:24.646 | 99.00th=[ 288], 99.50th=[ 305], 99.90th=[ 384], 99.95th=[ 384], 00:27:24.646 | 99.99th=[ 405] 00:27:24.646 bw ( KiB/s): min=59392, max=197632, per=11.01%, avg=113715.20, stdev=41557.88, samples=20 00:27:24.646 iops : min= 232, max= 772, avg=444.20, stdev=162.34, samples=20 00:27:24.646 lat (msec) : 20=0.09%, 50=0.18%, 100=28.39%, 250=64.86%, 500=6.48% 00:27:24.646 cpu : usr=1.20%, sys=1.26%, ctx=1115, majf=0, minf=1 00:27:24.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:24.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:24.646 issued rwts: total=0,4505,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:24.646 job8: (groupid=0, jobs=1): err= 0: pid=3948770: Thu Nov 7 13:32:31 2024 00:27:24.646 write: IOPS=395, BW=98.8MiB/s (104MB/s)(997MiB/10092msec); 0 zone resets 00:27:24.646 slat (usec): min=22, max=565660, avg=1970.77, stdev=10655.76 00:27:24.646 clat (msec): min=4, max=974, avg=159.93, stdev=87.42 00:27:24.646 lat (msec): min=5, max=974, avg=161.90, stdev=88.22 00:27:24.646 clat percentiles (msec): 00:27:24.647 | 1.00th=[ 14], 5.00th=[ 41], 10.00th=[ 72], 20.00th=[ 107], 00:27:24.647 | 30.00th=[ 117], 40.00th=[ 133], 50.00th=[ 165], 60.00th=[ 178], 00:27:24.647 | 70.00th=[ 188], 80.00th=[ 201], 90.00th=[ 222], 95.00th=[ 279], 00:27:24.647 | 99.00th=[ 609], 99.50th=[ 634], 99.90th=[ 659], 99.95th=[ 978], 00:27:24.647 | 99.99th=[ 978] 00:27:24.647 bw ( KiB/s): min=28672, max=179200, per=9.73%, avg=100480.00, stdev=30966.87, samples=20 00:27:24.647 iops : min= 112, max= 700, 
avg=392.50, stdev=120.96, samples=20 00:27:24.647 lat (msec) : 10=0.43%, 20=1.93%, 50=3.56%, 100=10.71%, 250=77.11% 00:27:24.647 lat (msec) : 500=4.69%, 750=1.50%, 1000=0.08% 00:27:24.647 cpu : usr=0.93%, sys=1.29%, ctx=1756, majf=0, minf=1 00:27:24.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:24.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:24.647 issued rwts: total=0,3988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.647 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:24.647 job9: (groupid=0, jobs=1): err= 0: pid=3948771: Thu Nov 7 13:32:31 2024 00:27:24.647 write: IOPS=315, BW=78.9MiB/s (82.7MB/s)(804MiB/10189msec); 0 zone resets 00:27:24.647 slat (usec): min=28, max=56021, avg=2718.82, stdev=6069.46 00:27:24.647 clat (msec): min=11, max=520, avg=199.96, stdev=105.70 00:27:24.647 lat (msec): min=11, max=520, avg=202.67, stdev=106.81 00:27:24.647 clat percentiles (msec): 00:27:24.647 | 1.00th=[ 69], 5.00th=[ 105], 10.00th=[ 109], 20.00th=[ 118], 00:27:24.647 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 142], 60.00th=[ 171], 00:27:24.647 | 70.00th=[ 275], 80.00th=[ 317], 90.00th=[ 359], 95.00th=[ 384], 00:27:24.647 | 99.00th=[ 489], 99.50th=[ 498], 99.90th=[ 510], 99.95th=[ 518], 00:27:24.647 | 99.99th=[ 523] 00:27:24.647 bw ( KiB/s): min=33792, max=145920, per=7.81%, avg=80716.80, stdev=37420.56, samples=20 00:27:24.647 iops : min= 132, max= 570, avg=315.30, stdev=146.17, samples=20 00:27:24.647 lat (msec) : 20=0.12%, 50=0.37%, 100=3.33%, 250=64.52%, 500=31.16% 00:27:24.647 lat (msec) : 750=0.50% 00:27:24.647 cpu : usr=0.69%, sys=1.09%, ctx=1095, majf=0, minf=1 00:27:24.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:24.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:24.647 issued rwts: total=0,3216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.647 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:24.647 job10: (groupid=0, jobs=1): err= 0: pid=3948772: Thu Nov 7 13:32:31 2024 00:27:24.647 write: IOPS=409, BW=102MiB/s (107MB/s)(1042MiB/10161msec); 0 zone resets 00:27:24.647 slat (usec): min=19, max=163995, avg=2186.49, stdev=5265.66 00:27:24.647 clat (msec): min=17, max=553, avg=153.84, stdev=86.48 00:27:24.647 lat (msec): min=17, max=553, avg=156.02, stdev=87.49 00:27:24.647 clat percentiles (msec): 00:27:24.647 | 1.00th=[ 28], 5.00th=[ 75], 10.00th=[ 79], 20.00th=[ 83], 00:27:24.647 | 30.00th=[ 108], 40.00th=[ 120], 50.00th=[ 125], 60.00th=[ 159], 00:27:24.647 | 70.00th=[ 178], 80.00th=[ 190], 90.00th=[ 257], 95.00th=[ 338], 00:27:24.647 | 99.00th=[ 460], 99.50th=[ 523], 99.90th=[ 550], 99.95th=[ 550], 00:27:24.647 | 99.99th=[ 550] 00:27:24.647 bw ( KiB/s): min=34816, max=205312, per=10.17%, avg=105011.20, stdev=46941.42, samples=20 00:27:24.647 iops : min= 136, max= 802, avg=410.20, stdev=183.36, samples=20 00:27:24.647 lat (msec) : 20=0.10%, 50=2.40%, 100=26.26%, 250=59.24%, 500=11.33% 00:27:24.647 lat (msec) : 750=0.67% 00:27:24.647 cpu : usr=0.92%, sys=1.18%, ctx=1303, majf=0, minf=1 00:27:24.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:24.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:27:24.647 issued rwts: total=0,4166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.647 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:24.647 00:27:24.647 Run status group 0 (all jobs): 00:27:24.647 WRITE: bw=1009MiB/s (1058MB/s), 72.3MiB/s-127MiB/s (75.8MB/s-133MB/s), io=10.0GiB (10.8GB), run=10092-10193msec 00:27:24.647 00:27:24.647 Disk stats (read/write): 00:27:24.647 nvme0n1: ios=49/7365, merge=0/0, ticks=273/1223436, in_queue=1223709, util=97.13% 00:27:24.647 nvme10n1: ios=45/10236, merge=0/0, ticks=992/1214608, in_queue=1215600, util=100.00% 00:27:24.647 nvme1n1: ios=45/6053, merge=0/0, ticks=1589/1225891, in_queue=1227480, util=100.00% 00:27:24.647 nvme2n1: ios=46/7693, merge=0/0, ticks=1470/1198600, in_queue=1200070, util=100.00% 00:27:24.647 nvme3n1: ios=42/6738, merge=0/0, ticks=1734/1230979, in_queue=1232713, util=100.00% 00:27:24.647 nvme4n1: ios=44/5803, merge=0/0, ticks=2032/1217341, in_queue=1219373, util=99.88% 00:27:24.647 nvme5n1: ios=44/6194, merge=0/0, ticks=2988/1224031, in_queue=1227019, util=100.00% 00:27:24.647 nvme6n1: ios=41/8932, merge=0/0, ticks=1027/1219153, in_queue=1220180, util=100.00% 00:27:24.647 nvme7n1: ios=42/7968, merge=0/0, ticks=3002/1157489, in_queue=1160491, util=100.00% 00:27:24.647 nvme8n1: ios=0/6347, merge=0/0, ticks=0/1221873, in_queue=1221873, util=98.92% 00:27:24.647 nvme9n1: ios=43/8255, merge=0/0, ticks=1014/1224494, in_queue=1225508, util=99.92% 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:24.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK1 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK1 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i 
in $(seq 1 $NVMF_SUBSYS) 00:27:24.647 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:25.219 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK2 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK2 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:25.219 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:25.790 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK3 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK3 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:27:25.790 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:26.049 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK4 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK4 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:26.049 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:26.620 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK5 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK5 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:27:26.620 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:27.192 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK6 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK6 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:27.192 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:27.453 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK7 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK7 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:27:27.453 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:27.714 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK8 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK8 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:27.714 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:27.974 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:27.974 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:27.974 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:27.974 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:27.974 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK9 00:27:27.974 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:27.974 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK9 00:27:28.235 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:28.235 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:28.235 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.235 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:28.235 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.235 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:27:28.235 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:28.495 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK10 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK10 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.495 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:28.755 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK11 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK11 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # 
rm -f ./local-job0-0-verify.state 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:28.755 rmmod nvme_tcp 00:27:28.755 rmmod nvme_fabrics 00:27:28.755 rmmod nvme_keyring 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3938738 ']' 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3938738 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' -z 3938738 ']' 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # kill -0 3938738 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # uname 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:28.755 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3938738 00:27:29.015 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:29.015 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:29.015 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3938738' 00:27:29.015 killing process with pid 3938738 00:27:29.015 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@971 -- # kill 3938738 00:27:29.015 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@976 -- # wait 3938738 00:27:30.922 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:30.922 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:30.922 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:30.922 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:27:30.922 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:27:30.922 13:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:30.922 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:27:30.922 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:30.922 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:30.922 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.923 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.923 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.454 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:33.454 00:27:33.454 real 1m23.093s 00:27:33.454 user 5m13.430s 00:27:33.454 sys 0m16.944s 00:27:33.454 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:33.455 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:33.455 ************************************ 00:27:33.455 END TEST nvmf_multiconnection 00:27:33.455 ************************************ 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:33.455 ************************************ 00:27:33.455 START TEST nvmf_initiator_timeout 00:27:33.455 ************************************ 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:33.455 * Looking for test storage... 
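The multiconnection teardown traced above repeats one sequence for each of the eleven subsystems: disconnect the initiator-side controller, poll lsblk until the SPDKn serial disappears, then delete the subsystem on the target over RPC. A condensed sketch of that loop, under the assumption that rpc.py is driven directly rather than through the rpc_cmd wrapper the trace shows; NVMF_SUBSYS=11 and the workspace path are taken from this log:

#!/usr/bin/env bash
# Sketch of the per-subsystem teardown traced above (multiconnection.sh@37-40).
NVMF_SUBSYS=11
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # Drop the initiator-side controller for this subsystem.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # waitforserial_disconnect: wait until no block device with serial SPDK$i remains.
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    # Remove the subsystem on the target side (rpc_cmd in the trace; direct rpc.py here).
    "${SPDK_DIR}/scripts/rpc.py" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done

The polling step is what waitforserial_disconnect contributes: nvme disconnect returns once the fabric controller is torn down, but the block device can linger briefly, and deleting the subsystem before it disappears would race the cleanup that follows.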
00:27:33.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:33.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.455 --rc genhtml_branch_coverage=1 00:27:33.455 --rc genhtml_function_coverage=1 00:27:33.455 --rc genhtml_legend=1 00:27:33.455 --rc geninfo_all_blocks=1 00:27:33.455 --rc geninfo_unexecuted_blocks=1 00:27:33.455 00:27:33.455 ' 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:33.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.455 --rc genhtml_branch_coverage=1 00:27:33.455 --rc genhtml_function_coverage=1 00:27:33.455 --rc genhtml_legend=1 00:27:33.455 --rc geninfo_all_blocks=1 00:27:33.455 --rc geninfo_unexecuted_blocks=1 00:27:33.455 00:27:33.455 ' 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:33.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.455 --rc genhtml_branch_coverage=1 00:27:33.455 --rc genhtml_function_coverage=1 00:27:33.455 --rc genhtml_legend=1 00:27:33.455 --rc geninfo_all_blocks=1 00:27:33.455 --rc geninfo_unexecuted_blocks=1 00:27:33.455 00:27:33.455 ' 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:33.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.455 --rc genhtml_branch_coverage=1 00:27:33.455 --rc genhtml_function_coverage=1 00:27:33.455 --rc genhtml_legend=1 00:27:33.455 --rc geninfo_all_blocks=1 00:27:33.455 --rc geninfo_unexecuted_blocks=1 00:27:33.455 00:27:33.455 ' 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.455 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.456 13:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:33.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:33.456 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:41.588 13:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:41.588 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:41.588 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.588 13:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:41.589 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:41.589 Found net devices under 0000:31:00.0: cvl_0_0 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.589 13:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:41.589 Found net devices under 0000:31:00.1: cvl_0_1 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.589 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.849 13:32:49 
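What nvmf_tcp_init is doing here: with a two-port E810 in one host, port cvl_0_0 is moved into a private network namespace to act as the target, cvl_0_1 stays in the root namespace as the initiator, and each side gets one address on 10.0.0.0/24 (the link-up steps follow just below). Condensed from the commands in this log, the sequence is:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                           # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address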
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:41.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:27:41.849 00:27:41.849 --- 10.0.0.2 ping statistics --- 00:27:41.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.849 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:41.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:27:41.849 00:27:41.849 --- 10.0.0.1 ping statistics --- 00:27:41.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.849 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3956331 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 
3956331 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # '[' -z 3956331 ']' 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:41.849 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:42.108 [2024-11-07 13:32:49.892694] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:27:42.108 [2024-11-07 13:32:49.892823] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.108 [2024-11-07 13:32:50.061253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:42.367 [2024-11-07 13:32:50.171712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.367 [2024-11-07 13:32:50.171763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.367 [2024-11-07 13:32:50.171779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.367 [2024-11-07 13:32:50.171790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.367 [2024-11-07 13:32:50.171799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
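nvmfappstart launches the target binary inside that namespace and waitforlisten then blocks until the app's RPC socket answers. A rough sketch of the two steps, assuming the default /var/tmp/spdk.sock RPC socket and the spdk checkout as working directory:

sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for i in {1..100}; do                                   # poll until the RPC server responds
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done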
00:27:42.367 [2024-11-07 13:32:50.174013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.367 [2024-11-07 13:32:50.174223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.367 [2024-11-07 13:32:50.174344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.367 [2024-11-07 13:32:50.174362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@866 -- # return 0 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:42.936 Malloc0 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:42.936 Delay0 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:42.936 [2024-11-07 13:32:50.787795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.936 13:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:42.936 [2024-11-07 13:32:50.828124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.936 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:44.845 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:44.845 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # local i=0 00:27:44.845 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:44.845 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:44.845 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # sleep 2 00:27:46.751 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:46.751 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:46.751 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:27:46.751 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:46.751 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:46.751 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # return 0 00:27:46.751 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3957048 00:27:46.751 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:46.751 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
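With the listener up, the initiator connects over plain TCP and waitforserial polls lsblk until a block device carrying the subsystem's serial shows up. The loop, reconstructed from the commands in this log:

nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
i=0
while (( i++ <= 15 )); do               # give the namespace up to ~30 s to appear
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )) && break
    sleep 2
done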
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:46.751 [global] 00:27:46.751 thread=1 00:27:46.751 invalidate=1 00:27:46.751 rw=write 00:27:46.751 time_based=1 00:27:46.751 runtime=60 00:27:46.751 ioengine=libaio 00:27:46.751 direct=1 00:27:46.751 bs=4096 00:27:46.751 iodepth=1 00:27:46.751 norandommap=0 00:27:46.751 numjobs=1 00:27:46.751 00:27:46.751 verify_dump=1 00:27:46.751 verify_backlog=512 00:27:46.751 verify_state_save=0 00:27:46.751 do_verify=1 00:27:46.751 verify=crc32c-intel 00:27:46.751 [job0] 00:27:46.751 filename=/dev/nvme0n1 00:27:46.751 Could not set queue depth (nvme0n1) 00:27:47.011 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:47.011 fio-3.35 00:27:47.011 Starting 1 thread 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:49.555 true 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:49.555 true 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:49.555 true 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:49.555 true 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.555 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.853 13:33:00 
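This is the heart of the test: bdev_delay_update_latency raises Delay0's latencies (arguments are in microseconds) from the 30 us it was created with to 31 s, presumably just past the Linux host's default 30 s NVMe I/O timeout, so in-flight fio commands time out and exercise the initiator's timeout/retry path; a few seconds later they are dropped back to 30 us, as the calls continuing below show. A sketch of the bump, with the host-side timeout check as an assumed extra step:

cat /sys/module/nvme_core/parameters/io_timeout         # host I/O timeout, 30 by default
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000   # 31 s
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000  # 310 s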
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.853 true 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.853 true 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.853 true 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.853 true 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:52.853 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3957048 00:28:49.261 00:28:49.261 job0: (groupid=0, jobs=1): err= 0: pid=3957382: Thu Nov 7 13:33:54 2024 00:28:49.261 read: IOPS=75, BW=302KiB/s (309kB/s)(17.7MiB/60001msec) 00:28:49.261 slat (nsec): min=6129, max=75427, avg=24514.50, stdev=8884.18 00:28:49.261 clat (usec): min=320, max=41942k, avg=12570.75, stdev=623267.75 00:28:49.261 lat (usec): min=327, max=41942k, avg=12595.27, stdev=623267.83 00:28:49.261 clat percentiles (usec): 00:28:49.261 | 1.00th=[ 474], 5.00th=[ 562], 10.00th=[ 594], 00:28:49.261 | 20.00th=[ 652], 30.00th=[ 676], 40.00th=[ 701], 00:28:49.261 | 50.00th=[ 742], 60.00th=[ 766], 70.00th=[ 783], 00:28:49.261 | 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 41681], 00:28:49.261 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 44827], 00:28:49.261 | 99.95th=[ 44827], 99.99th=[17112761] 00:28:49.261 write: IOPS=76, BW=307KiB/s (315kB/s)(18.0MiB/60001msec); 0 zone resets 00:28:49.261 slat (usec): min=9, max=31973, avg=39.42, stdev=470.66 00:28:49.261 clat (usec): min=165, max=1011, avg=587.13, stdev=136.19 00:28:49.261 lat (usec): min=176, max=32794, avg=626.54, stdev=494.46 00:28:49.261 clat percentiles (usec): 00:28:49.261 | 1.00th=[ 289], 5.00th=[ 375], 10.00th=[ 420], 20.00th=[ 482], 00:28:49.261 | 30.00th=[ 519], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 611], 00:28:49.261 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 783], 95.00th=[ 848], 00:28:49.261 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 
979], 99.95th=[ 996], 00:28:49.261 | 99.99th=[ 1012] 00:28:49.261 bw ( KiB/s): min= 128, max= 4096, per=100.00%, avg=2915.33, stdev=1297.23, samples=12 00:28:49.261 iops : min= 32, max= 1024, avg=728.83, stdev=324.31, samples=12 00:28:49.261 lat (usec) : 250=0.32%, 500=12.30%, 750=58.02%, 1000=26.17% 00:28:49.261 lat (msec) : 2=0.07%, 50=3.12%, >=2000=0.01% 00:28:49.261 cpu : usr=0.35%, sys=0.54%, ctx=9141, majf=0, minf=1 00:28:49.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.261 issued rwts: total=4529,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:49.261 00:28:49.261 Run status group 0 (all jobs): 00:28:49.261 READ: bw=302KiB/s (309kB/s), 302KiB/s-302KiB/s (309kB/s-309kB/s), io=17.7MiB (18.6MB), run=60001-60001msec 00:28:49.261 WRITE: bw=307KiB/s (315kB/s), 307KiB/s-307KiB/s (315kB/s-315kB/s), io=18.0MiB (18.9MB), run=60001-60001msec 00:28:49.261 00:28:49.261 Disk stats (read/write): 00:28:49.261 nvme0n1: ios=4441/4608, merge=0/0, ticks=16233/2051, in_queue=18284, util=99.87% 00:28:49.261 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:49.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1221 -- # local i=0 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1233 -- # return 0 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:49.262 nvmf hotplug test: fio successful as expected 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
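The fio numbers above are self-consistent: at bs=4096 and iodepth=1 the reported bandwidths imply exactly the reported IOPS, and the 41.9 s clat maximum is consistent with a command caught in the 31 s delay window plus timeout handling. Quick arithmetic on the logged values:

echo $(( 302 * 1024 / 4096 ))   # 75 -> matches "read:  IOPS=75, BW=302KiB/s"
echo $(( 307 * 1024 / 4096 ))   # 76 -> matches "write: IOPS=76, BW=307KiB/s"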
-- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.262 rmmod nvme_tcp 00:28:49.262 rmmod nvme_fabrics 00:28:49.262 rmmod nvme_keyring 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3956331 ']' 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3956331 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' -z 3956331 ']' 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # kill -0 3956331 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # uname 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3956331 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3956331' 00:28:49.262 killing process with pid 3956331 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # kill 3956331 00:28:49.262 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@976 -- # wait 3956331 00:28:49.262 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:49.262 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:49.262 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:49.262 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:49.262 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:49.262 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 
-- # grep -v SPDK_NVMF 00:28:49.262 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:28:49.262 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.262 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.262 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.262 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.262 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.648 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:50.648 00:28:50.648 real 1m17.306s 00:28:50.648 user 4m37.064s 00:28:50.648 sys 0m9.005s 00:28:50.648 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:50.648 13:33:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:50.648 ************************************ 00:28:50.648 END TEST nvmf_initiator_timeout 00:28:50.648 ************************************ 00:28:50.648 13:33:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:50.648 13:33:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:50.648 13:33:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:50.648 13:33:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.648 13:33:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # 
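Firewall cleanup in the teardown above works because every rule the test inserted was tagged: the ipts wrapper appended an '-m comment --comment SPDK_NVMF:...' copy of its own arguments, so iptr can sweep all of them in one ruleset round-trip without touching unrelated rules. The pattern, as used here:

# setup: tag each inserted rule so it can be found later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown: re-load the ruleset minus every tagged rule
iptables-save | grep -v SPDK_NVMF | iptables-restore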
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:58.787 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:58.787 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.787 
13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:58.787 Found net devices under 0000:31:00.0: cvl_0_0 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:58.787 Found net devices under 0000:31:00.1: cvl_0_1 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:58.787 ************************************ 00:28:58.787 START TEST nvmf_perf_adq 00:28:58.787 ************************************ 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:58.787 * Looking for test storage... 
00:28:58.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.787 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:58.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.788 --rc genhtml_branch_coverage=1 00:28:58.788 --rc genhtml_function_coverage=1 00:28:58.788 --rc genhtml_legend=1 00:28:58.788 --rc geninfo_all_blocks=1 00:28:58.788 --rc geninfo_unexecuted_blocks=1 00:28:58.788 00:28:58.788 ' 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:58.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.788 --rc genhtml_branch_coverage=1 00:28:58.788 --rc genhtml_function_coverage=1 00:28:58.788 --rc genhtml_legend=1 00:28:58.788 --rc geninfo_all_blocks=1 00:28:58.788 --rc geninfo_unexecuted_blocks=1 00:28:58.788 00:28:58.788 ' 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:58.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.788 --rc genhtml_branch_coverage=1 00:28:58.788 --rc genhtml_function_coverage=1 00:28:58.788 --rc genhtml_legend=1 00:28:58.788 --rc geninfo_all_blocks=1 00:28:58.788 --rc geninfo_unexecuted_blocks=1 00:28:58.788 00:28:58.788 ' 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:58.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.788 --rc genhtml_branch_coverage=1 00:28:58.788 --rc genhtml_function_coverage=1 00:28:58.788 --rc genhtml_legend=1 00:28:58.788 --rc geninfo_all_blocks=1 00:28:58.788 --rc geninfo_unexecuted_blocks=1 00:28:58.788 00:28:58.788 ' 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
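The cmp_versions dance above decides whether the installed lcov predates 2.0 (pre-2.0 lcov still uses the lcov_branch_coverage/lcov_function_coverage --rc option names, hence the LCOV_OPTS just exported). It splits each version on ".-:" and compares fields numerically, left to right; stripped to its core, the idea looks like this sketch:

lt() {                                  # true when version $1 < version $2
    local -a a b; local i
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                            # equal is not less-than
}
lt 1.15 2 && echo 'old lcov: keep the lcov_* --rc flags'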
00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:58.788 13:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.788 13:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.927 13:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:06.927 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:06.927 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.927 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:06.928 Found net devices under 0000:31:00.0: cvl_0_0 00:29:06.928 13:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:06.928 Found net devices under 0000:31:00.1: cvl_0_1 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:06.928 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:08.838 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:10.744 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:16.027 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:16.028 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:16.028 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:16.028 Found net devices under 0000:31:00.0: cvl_0_0 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:16.028 Found net devices under 0000:31:00.1: cvl_0_1 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:16.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:16.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:29:16.028 00:29:16.028 --- 10.0.0.2 ping statistics --- 00:29:16.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.028 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:16.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:16.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:29:16.028 00:29:16.028 --- 10.0.0.1 ping statistics --- 00:29:16.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.028 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3979859 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3979859 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3979859 ']' 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:16.028 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.028 [2024-11-07 13:34:23.914380] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
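
Here nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, and waitforlisten blocks until the RPC socket responds before any configuration RPCs are sent. A minimal sketch of that launch-and-wait pattern, assuming an SPDK checkout under $SPDK_DIR and the default /var/tmp/spdk.sock socket (the fixed retry loop stands in for the harness's waitforlisten, which has more retry logic):

    # Start nvmf_tgt on 4 cores (-m 0xF) in the target namespace; it stays
    # paused until framework_start_init is called because of --wait-for-rpc.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!

    # Poll the RPC socket until the app answers, then let init proceed.
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1 && break
        sleep 0.1
    done
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init
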
00:29:16.028 [2024-11-07 13:34:23.914510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.289 [2024-11-07 13:34:24.081835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:16.289 [2024-11-07 13:34:24.182654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.289 [2024-11-07 13:34:24.182698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.289 [2024-11-07 13:34:24.182710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.289 [2024-11-07 13:34:24.182721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.289 [2024-11-07 13:34:24.182730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:16.289 [2024-11-07 13:34:24.184967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.289 [2024-11-07 13:34:24.185190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.289 [2024-11-07 13:34:24.185312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.289 [2024-11-07 13:34:24.185330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:16.858 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:16.858 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:29:16.858 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:16.858 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:16.858 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.858 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:16.858 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:29:16.859 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:16.859 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:16.859 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.859 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.859 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.859 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:16.859 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:16.859 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.859 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.859 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.859 
13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:16.859 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.859 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.118 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.118 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:17.118 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.118 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.118 [2024-11-07 13:34:25.047086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.118 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.118 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:17.118 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.118 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.118 Malloc1 00:29:17.376 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.377 [2024-11-07 13:34:25.153722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3980224 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:29:17.377 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:19.283 13:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:29:19.283 13:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.283 13:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:19.283 13:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.283 13:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:29:19.283 "tick_rate": 2400000000, 00:29:19.283 "poll_groups": [ 00:29:19.283 { 00:29:19.283 "name": "nvmf_tgt_poll_group_000", 00:29:19.283 "admin_qpairs": 1, 00:29:19.283 "io_qpairs": 1, 00:29:19.283 "current_admin_qpairs": 1, 00:29:19.283 "current_io_qpairs": 1, 00:29:19.283 "pending_bdev_io": 0, 00:29:19.283 "completed_nvme_io": 20011, 00:29:19.283 "transports": [ 00:29:19.283 { 00:29:19.283 "trtype": "TCP" 00:29:19.283 } 00:29:19.283 ] 00:29:19.283 }, 00:29:19.283 { 00:29:19.283 "name": "nvmf_tgt_poll_group_001", 00:29:19.283 "admin_qpairs": 0, 00:29:19.283 "io_qpairs": 1, 00:29:19.283 "current_admin_qpairs": 0, 00:29:19.283 "current_io_qpairs": 1, 00:29:19.283 "pending_bdev_io": 0, 00:29:19.283 "completed_nvme_io": 26685, 00:29:19.283 "transports": [ 00:29:19.283 { 00:29:19.283 "trtype": "TCP" 00:29:19.283 } 00:29:19.283 ] 00:29:19.283 }, 00:29:19.283 { 00:29:19.283 "name": "nvmf_tgt_poll_group_002", 00:29:19.283 "admin_qpairs": 0, 00:29:19.283 "io_qpairs": 1, 00:29:19.283 "current_admin_qpairs": 0, 00:29:19.283 "current_io_qpairs": 1, 00:29:19.283 "pending_bdev_io": 0, 00:29:19.283 "completed_nvme_io": 21731, 00:29:19.283 "transports": [ 00:29:19.283 { 00:29:19.283 "trtype": "TCP" 00:29:19.283 } 00:29:19.283 ] 00:29:19.283 }, 00:29:19.283 { 00:29:19.283 "name": "nvmf_tgt_poll_group_003", 00:29:19.283 "admin_qpairs": 0, 00:29:19.283 "io_qpairs": 1, 00:29:19.283 "current_admin_qpairs": 0, 00:29:19.283 "current_io_qpairs": 1, 00:29:19.283 "pending_bdev_io": 0, 00:29:19.283 "completed_nvme_io": 20440, 00:29:19.283 "transports": [ 00:29:19.283 { 00:29:19.283 "trtype": "TCP" 00:29:19.283 } 00:29:19.283 ] 00:29:19.283 } 00:29:19.283 ] 00:29:19.283 }' 00:29:19.283 13:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:19.283 13:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:29:19.283 13:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:29:19.283 13:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:29:19.283 13:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3980224 00:29:27.426 Initializing NVMe Controllers 00:29:27.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:27.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:27.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:27.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:27.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:29:27.426 Initialization complete. Launching workers. 00:29:27.426 ======================================================== 00:29:27.426 Latency(us) 00:29:27.426 Device Information : IOPS MiB/s Average min max 00:29:27.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13447.90 52.53 4759.12 1350.29 9172.78 00:29:27.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14919.80 58.28 4289.46 1191.50 9551.35 00:29:27.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14204.10 55.48 4519.76 1193.37 46798.37 00:29:27.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11487.40 44.87 5571.51 1719.04 11837.38 00:29:27.426 ======================================================== 00:29:27.426 Total : 54059.19 211.17 4739.24 1191.50 46798.37 00:29:27.426 00:29:27.426 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:29:27.426 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:27.426 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:27.426 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:27.426 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:27.426 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.426 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:27.426 rmmod nvme_tcp 00:29:27.686 rmmod nvme_fabrics 00:29:27.686 rmmod nvme_keyring 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3979859 ']' 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3979859 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3979859 ']' 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3979859 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3979859 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3979859' 00:29:27.686 killing process with pid 3979859 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3979859 00:29:27.686 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3979859 00:29:28.623 13:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:28.623 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:28.623 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:28.623 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:28.623 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:28.623 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:28.623 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:28.623 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:28.623 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:28.623 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.623 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.623 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.535 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:30.535 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:29:30.535 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:30.535 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:32.445 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:34.353 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:39.638 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:39.638 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:39.639 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:39.639 Found net devices under 0000:31:00.0: cvl_0_0 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:39.639 Found net devices under 0000:31:00.1: cvl_0_1 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:39.639 13:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:39.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:39.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:29:39.639 00:29:39.639 --- 10.0.0.2 ping statistics --- 00:29:39.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.639 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:39.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:39.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:29:39.639 00:29:39.639 --- 10.0.0.1 ping statistics --- 00:29:39.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.639 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:39.639 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:39.900 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:39.900 net.core.busy_poll = 1 00:29:39.900 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:39.900 net.core.busy_read = 1 00:29:39.900 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:39.900 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:39.900 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:29:39.900 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:39.900 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3984792 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3984792 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3984792 ']' 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:40.161 13:34:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:40.161 [2024-11-07 13:34:48.026169] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:29:40.161 [2024-11-07 13:34:48.026314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.421 [2024-11-07 13:34:48.191740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:40.421 [2024-11-07 13:34:48.292800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
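For reference, the adq_configure_driver sequence traced above reduces to the following host-side steps (a condensed sketch; IFACE stands in for cvl_0_0, and in this run every command is additionally wrapped in 'ip netns exec cvl_0_0_ns_spdk'):

  IFACE=cvl_0_0
  # enable hardware TC offload on the ice port and turn off its
  # channel-pkt-inspect-optimize private flag, as the trace does
  ethtool --offload $IFACE hw-tc-offload on
  ethtool --set-priv-flags $IFACE channel-pkt-inspect-optimize off
  # busy-poll sockets instead of waiting on interrupts
  sysctl -w net.core.busy_poll=1 net.core.busy_read=1
  # carve the queues into two traffic classes (2 queues each) in channel mode,
  # then steer NVMe/TCP traffic (TCP port 4420) into TC 1, offloaded in hardware
  tc qdisc add dev $IFACE root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev $IFACE ingress
  tc filter add dev $IFACE protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1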
00:29:40.421 [2024-11-07 13:34:48.292845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.421 [2024-11-07 13:34:48.292856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.421 [2024-11-07 13:34:48.292874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.421 [2024-11-07 13:34:48.292884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:40.421 [2024-11-07 13:34:48.295056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.421 [2024-11-07 13:34:48.295140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:40.421 [2024-11-07 13:34:48.295257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.422 [2024-11-07 13:34:48.295280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.991 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.251 13:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:41.251 [2024-11-07 13:34:49.152381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:41.251 Malloc1 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.251 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:41.511 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.511 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:41.511 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.511 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:41.511 [2024-11-07 13:34:49.268791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.511 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.511 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3984981 00:29:41.511 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:41.511 13:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:43.442 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:43.442 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.442 13:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:43.442 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.442 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:43.442 "tick_rate": 2400000000, 00:29:43.442 "poll_groups": [ 00:29:43.442 { 00:29:43.442 "name": "nvmf_tgt_poll_group_000", 00:29:43.442 "admin_qpairs": 1, 00:29:43.442 "io_qpairs": 2, 00:29:43.442 "current_admin_qpairs": 1, 00:29:43.442 "current_io_qpairs": 2, 00:29:43.442 "pending_bdev_io": 0, 00:29:43.442 "completed_nvme_io": 25699, 00:29:43.442 "transports": [ 00:29:43.442 { 00:29:43.442 "trtype": "TCP" 00:29:43.442 } 00:29:43.442 ] 00:29:43.442 }, 00:29:43.442 { 00:29:43.442 "name": "nvmf_tgt_poll_group_001", 00:29:43.442 "admin_qpairs": 0, 00:29:43.442 "io_qpairs": 2, 00:29:43.442 "current_admin_qpairs": 0, 00:29:43.442 "current_io_qpairs": 2, 00:29:43.442 "pending_bdev_io": 0, 00:29:43.442 "completed_nvme_io": 34491, 00:29:43.442 "transports": [ 00:29:43.442 { 00:29:43.442 "trtype": "TCP" 00:29:43.442 } 00:29:43.442 ] 00:29:43.442 }, 00:29:43.442 { 00:29:43.442 "name": "nvmf_tgt_poll_group_002", 00:29:43.442 "admin_qpairs": 0, 00:29:43.442 "io_qpairs": 0, 00:29:43.442 "current_admin_qpairs": 0, 00:29:43.442 "current_io_qpairs": 0, 00:29:43.442 "pending_bdev_io": 0, 00:29:43.442 "completed_nvme_io": 0, 00:29:43.442 "transports": [ 00:29:43.442 { 00:29:43.442 "trtype": "TCP" 00:29:43.442 } 00:29:43.442 ] 00:29:43.442 }, 00:29:43.442 { 00:29:43.442 "name": "nvmf_tgt_poll_group_003", 00:29:43.442 "admin_qpairs": 0, 00:29:43.442 "io_qpairs": 0, 00:29:43.442 "current_admin_qpairs": 0, 00:29:43.442 "current_io_qpairs": 0, 00:29:43.442 "pending_bdev_io": 0, 00:29:43.442 "completed_nvme_io": 0, 00:29:43.442 "transports": [ 00:29:43.442 { 00:29:43.442 "trtype": "TCP" 00:29:43.442 } 00:29:43.442 ] 00:29:43.442 } 00:29:43.442 ] 00:29:43.442 }' 00:29:43.442 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:43.442 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:43.442 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:29:43.442 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:29:43.442 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3984981 00:29:51.584 Initializing NVMe Controllers 00:29:51.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:51.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:51.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:51.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:51.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:51.584 Initialization complete. Launching workers. 
00:29:51.584 ======================================================== 00:29:51.584 Latency(us) 00:29:51.584 Device Information : IOPS MiB/s Average min max 00:29:51.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9748.70 38.08 6564.52 1437.96 49920.04 00:29:51.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9190.00 35.90 6966.54 1220.38 50205.79 00:29:51.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7391.00 28.87 8661.43 1500.90 53789.22 00:29:51.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10064.50 39.31 6380.86 1322.46 53517.44 00:29:51.584 ======================================================== 00:29:51.584 Total : 36394.20 142.16 7041.09 1220.38 53789.22 00:29:51.584 00:29:51.584 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:51.584 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:51.584 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:51.584 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:51.584 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:51.584 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:51.584 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:51.584 rmmod nvme_tcp 00:29:51.584 rmmod nvme_fabrics 00:29:51.845 rmmod nvme_keyring 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3984792 ']' 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3984792 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3984792 ']' 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3984792 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3984792 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3984792' 00:29:51.845 killing process with pid 3984792 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3984792 00:29:51.845 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3984792 00:29:52.786 13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:52.786 
13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:52.786 13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:52.786 13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:52.786 13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:52.786 13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:52.786 13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:52.786 13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:52.786 13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:52.786 13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.786 13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.786 13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.692 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:54.692 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:54.692 00:29:54.692 real 0m56.207s 00:29:54.692 user 2m54.760s 00:29:54.692 sys 0m13.031s 00:29:54.692 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:54.692 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.692 ************************************ 00:29:54.692 END TEST nvmf_perf_adq 00:29:54.692 ************************************ 00:29:54.692 13:35:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:54.692 13:35:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:54.692 13:35:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:54.692 13:35:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:54.951 ************************************ 00:29:54.951 START TEST nvmf_shutdown 00:29:54.951 ************************************ 00:29:54.951 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:54.951 * Looking for test storage... 
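The nvmftestfini/nvmfcleanup path traced above tears all of this down again; roughly (a sketch -- the namespace deletion is an assumption, since _remove_spdk_ns itself is not expanded in this trace):

  modprobe -v -r nvme-tcp nvme-fabrics                   # unload initiator-side modules (the rmmod output above)
  kill $nvmfpid && wait $nvmfpid                         # stop the nvmf_tgt reactors
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the test tagged SPDK_NVMF
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the initiator-side address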
00:29:54.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:54.951 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:54.951 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:29:54.951 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:54.951 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:54.951 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:54.951 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:54.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.952 --rc genhtml_branch_coverage=1 00:29:54.952 --rc genhtml_function_coverage=1 00:29:54.952 --rc genhtml_legend=1 00:29:54.952 --rc geninfo_all_blocks=1 00:29:54.952 --rc geninfo_unexecuted_blocks=1 00:29:54.952 00:29:54.952 ' 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:54.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.952 --rc genhtml_branch_coverage=1 00:29:54.952 --rc genhtml_function_coverage=1 00:29:54.952 --rc genhtml_legend=1 00:29:54.952 --rc geninfo_all_blocks=1 00:29:54.952 --rc geninfo_unexecuted_blocks=1 00:29:54.952 00:29:54.952 ' 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:54.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.952 --rc genhtml_branch_coverage=1 00:29:54.952 --rc genhtml_function_coverage=1 00:29:54.952 --rc genhtml_legend=1 00:29:54.952 --rc geninfo_all_blocks=1 00:29:54.952 --rc geninfo_unexecuted_blocks=1 00:29:54.952 00:29:54.952 ' 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:54.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.952 --rc genhtml_branch_coverage=1 00:29:54.952 --rc genhtml_function_coverage=1 00:29:54.952 --rc genhtml_legend=1 00:29:54.952 --rc geninfo_all_blocks=1 00:29:54.952 --rc geninfo_unexecuted_blocks=1 00:29:54.952 00:29:54.952 ' 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
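The scripts/common.sh fragment traced above is the suite's dotted-version comparison, here deciding whether the installed lcov predates 2.x. A self-contained sketch of that lt/cmp_versions logic (assuming purely numeric fields; the real helper validates each field through its decimal check first):

  lt() {
      local IFS=.-:                  # split on '.', '-' or ':', as the trace shows
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0    # first differing field decides
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1                       # equal versions: not strictly less-than
  }
  lt 1.15 2 && echo 'lcov 1.15 predates 2.x'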
00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:54.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:54.952 13:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:54.952 ************************************ 00:29:54.952 START TEST nvmf_shutdown_tc1 00:29:54.952 ************************************ 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:54.952 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:54.953 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:54.953 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.953 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:54.953 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:54.953 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:54.953 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.953 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.953 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.953 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:54.953 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:54.953 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:54.953 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:03.092 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.092 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:03.092 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:03.092 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:03.092 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:03.092 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:03.092 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:03.092 13:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:03.093 13:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:03.093 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:03.093 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:03.093 Found net devices under 0000:31:00.0: cvl_0_0 00:30:03.093 13:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:03.093 Found net devices under 0000:31:00.1: cvl_0_1 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.093 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.093 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.093 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.093 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:03.093 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:03.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:30:03.468 00:30:03.468 --- 10.0.0.2 ping statistics --- 00:30:03.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.468 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:03.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:30:03.468 00:30:03.468 --- 10.0.0.1 ping statistics --- 00:30:03.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.468 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3991886 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3991886 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3991886 ']' 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
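nvmfappstart above boils down to launch-and-poll: start nvmf_tgt inside the test namespace, then block until its RPC socket answers. A minimal equivalent of the waitforlisten pattern (poll interval and failure handling are illustrative, not taken from this trace):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # rpc_get_methods succeeds only once the target is listening on the socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 $nvmfpid 2> /dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
      sleep 0.5
  done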
00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:03.468 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:03.468 [2024-11-07 13:35:11.370874] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:30:03.468 [2024-11-07 13:35:11.371003] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.772 [2024-11-07 13:35:11.552823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:03.773 [2024-11-07 13:35:11.680891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.773 [2024-11-07 13:35:11.680966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.773 [2024-11-07 13:35:11.680985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.773 [2024-11-07 13:35:11.680998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.773 [2024-11-07 13:35:11.681008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:03.773 [2024-11-07 13:35:11.683906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:03.773 [2024-11-07 13:35:11.684075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:03.773 [2024-11-07 13:35:11.684185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.773 [2024-11-07 13:35:11.684212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:04.342 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:04.342 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:30:04.342 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:04.342 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:04.342 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:04.342 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:04.342 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:04.343 [2024-11-07 13:35:12.185877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:04.343 13:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.343 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:04.343 Malloc1 
00:30:04.343 [2024-11-07 13:35:12.342711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.602 Malloc2 00:30:04.602 Malloc3 00:30:04.602 Malloc4 00:30:04.861 Malloc5 00:30:04.861 Malloc6 00:30:04.861 Malloc7 00:30:05.121 Malloc8 00:30:05.121 Malloc9 00:30:05.121 Malloc10 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3992283 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3992283 /var/tmp/bdevperf.sock 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3992283 ']' 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:05.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
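[annotation] The create_subsystems loop traced earlier (the repeated "for i in ${num_subsystems[@]}" / cat pairs) appends one block of RPCs per subsystem to rpcs.txt and then applies the whole file with a single rpc_cmd call; the Malloc1 through Malloc10 lines are the resulting bdevs, and the tcp.c notice confirms the listener on 10.0.0.2:4420. A condensed sketch of what each iteration plausibly emits (the malloc size arguments and the exact RPC set are assumptions; the RPC names are standard SPDK RPCs):

for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < rpcs.txt    # one batched RPC session creates all ten subsystems (how the file is applied is an assumption)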
00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.121 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.121 { 00:30:05.121 "params": { 00:30:05.121 "name": "Nvme$subsystem", 00:30:05.121 "trtype": "$TEST_TRANSPORT", 00:30:05.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.122 "adrfam": "ipv4", 00:30:05.122 "trsvcid": "$NVMF_PORT", 00:30:05.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.122 "hdgst": ${hdgst:-false}, 00:30:05.122 "ddgst": ${ddgst:-false} 00:30:05.122 }, 00:30:05.122 "method": "bdev_nvme_attach_controller" 00:30:05.122 } 00:30:05.122 EOF 00:30:05.122 )") 00:30:05.122 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:05.122 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.122 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.122 { 00:30:05.122 "params": { 00:30:05.122 "name": "Nvme$subsystem", 00:30:05.122 "trtype": "$TEST_TRANSPORT", 00:30:05.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.122 "adrfam": "ipv4", 00:30:05.122 "trsvcid": "$NVMF_PORT", 00:30:05.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.122 "hdgst": ${hdgst:-false}, 00:30:05.122 "ddgst": ${ddgst:-false} 00:30:05.122 }, 00:30:05.122 "method": "bdev_nvme_attach_controller" 00:30:05.122 } 00:30:05.122 EOF 00:30:05.122 )") 00:30:05.122 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:05.382 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.382 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.382 { 00:30:05.382 "params": { 00:30:05.382 "name": "Nvme$subsystem", 00:30:05.382 "trtype": "$TEST_TRANSPORT", 00:30:05.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.382 "adrfam": "ipv4", 00:30:05.382 "trsvcid": "$NVMF_PORT", 00:30:05.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.382 "hdgst": ${hdgst:-false}, 00:30:05.382 "ddgst": ${ddgst:-false} 00:30:05.382 }, 00:30:05.382 "method": "bdev_nvme_attach_controller" 
00:30:05.382 } 00:30:05.382 EOF 00:30:05.382 )") 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.383 { 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme$subsystem", 00:30:05.383 "trtype": "$TEST_TRANSPORT", 00:30:05.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "$NVMF_PORT", 00:30:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.383 "hdgst": ${hdgst:-false}, 00:30:05.383 "ddgst": ${ddgst:-false} 00:30:05.383 }, 00:30:05.383 "method": "bdev_nvme_attach_controller" 00:30:05.383 } 00:30:05.383 EOF 00:30:05.383 )") 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.383 { 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme$subsystem", 00:30:05.383 "trtype": "$TEST_TRANSPORT", 00:30:05.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "$NVMF_PORT", 00:30:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.383 "hdgst": ${hdgst:-false}, 00:30:05.383 "ddgst": ${ddgst:-false} 00:30:05.383 }, 00:30:05.383 "method": "bdev_nvme_attach_controller" 00:30:05.383 } 00:30:05.383 EOF 00:30:05.383 )") 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.383 { 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme$subsystem", 00:30:05.383 "trtype": "$TEST_TRANSPORT", 00:30:05.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "$NVMF_PORT", 00:30:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.383 "hdgst": ${hdgst:-false}, 00:30:05.383 "ddgst": ${ddgst:-false} 00:30:05.383 }, 00:30:05.383 "method": "bdev_nvme_attach_controller" 00:30:05.383 } 00:30:05.383 EOF 00:30:05.383 )") 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.383 { 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme$subsystem", 00:30:05.383 "trtype": "$TEST_TRANSPORT", 00:30:05.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "$NVMF_PORT", 00:30:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.383 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.383 "hdgst": ${hdgst:-false}, 00:30:05.383 "ddgst": ${ddgst:-false} 00:30:05.383 }, 00:30:05.383 "method": "bdev_nvme_attach_controller" 00:30:05.383 } 00:30:05.383 EOF 00:30:05.383 )") 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.383 { 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme$subsystem", 00:30:05.383 "trtype": "$TEST_TRANSPORT", 00:30:05.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "$NVMF_PORT", 00:30:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.383 "hdgst": ${hdgst:-false}, 00:30:05.383 "ddgst": ${ddgst:-false} 00:30:05.383 }, 00:30:05.383 "method": "bdev_nvme_attach_controller" 00:30:05.383 } 00:30:05.383 EOF 00:30:05.383 )") 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.383 { 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme$subsystem", 00:30:05.383 "trtype": "$TEST_TRANSPORT", 00:30:05.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "$NVMF_PORT", 00:30:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.383 "hdgst": ${hdgst:-false}, 00:30:05.383 "ddgst": ${ddgst:-false} 00:30:05.383 }, 00:30:05.383 "method": "bdev_nvme_attach_controller" 00:30:05.383 } 00:30:05.383 EOF 00:30:05.383 )") 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.383 { 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme$subsystem", 00:30:05.383 "trtype": "$TEST_TRANSPORT", 00:30:05.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "$NVMF_PORT", 00:30:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.383 "hdgst": ${hdgst:-false}, 00:30:05.383 "ddgst": ${ddgst:-false} 00:30:05.383 }, 00:30:05.383 "method": "bdev_nvme_attach_controller" 00:30:05.383 } 00:30:05.383 EOF 00:30:05.383 )") 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:05.383 [2024-11-07 13:35:13.189581] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:30:05.383 [2024-11-07 13:35:13.189685] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:30:05.383 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme1", 00:30:05.383 "trtype": "tcp", 00:30:05.383 "traddr": "10.0.0.2", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "4420", 00:30:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.383 "hdgst": false, 00:30:05.383 "ddgst": false 00:30:05.383 }, 00:30:05.383 "method": "bdev_nvme_attach_controller" 00:30:05.383 },{ 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme2", 00:30:05.383 "trtype": "tcp", 00:30:05.383 "traddr": "10.0.0.2", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "4420", 00:30:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:05.383 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:05.383 "hdgst": false, 00:30:05.383 "ddgst": false 00:30:05.383 }, 00:30:05.383 "method": "bdev_nvme_attach_controller" 00:30:05.383 },{ 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme3", 00:30:05.383 "trtype": "tcp", 00:30:05.383 "traddr": "10.0.0.2", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "4420", 00:30:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:05.383 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:05.383 "hdgst": false, 00:30:05.383 "ddgst": false 00:30:05.383 }, 00:30:05.383 "method": "bdev_nvme_attach_controller" 00:30:05.383 },{ 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme4", 00:30:05.383 "trtype": "tcp", 00:30:05.383 "traddr": "10.0.0.2", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "4420", 00:30:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:05.383 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:05.383 "hdgst": false, 00:30:05.383 "ddgst": false 00:30:05.383 }, 00:30:05.383 "method": "bdev_nvme_attach_controller" 00:30:05.383 },{ 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme5", 00:30:05.383 "trtype": "tcp", 00:30:05.383 "traddr": "10.0.0.2", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "4420", 00:30:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:05.383 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:05.383 "hdgst": false, 00:30:05.383 "ddgst": false 00:30:05.383 }, 00:30:05.383 "method": "bdev_nvme_attach_controller" 00:30:05.383 },{ 00:30:05.383 "params": { 00:30:05.383 "name": "Nvme6", 00:30:05.383 "trtype": "tcp", 00:30:05.383 "traddr": "10.0.0.2", 00:30:05.383 "adrfam": "ipv4", 00:30:05.383 "trsvcid": "4420", 00:30:05.384 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:05.384 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:05.384 "hdgst": false, 00:30:05.384 "ddgst": false 00:30:05.384 }, 00:30:05.384 "method": "bdev_nvme_attach_controller" 00:30:05.384 },{ 00:30:05.384 "params": { 00:30:05.384 "name": "Nvme7", 00:30:05.384 "trtype": "tcp", 00:30:05.384 "traddr": "10.0.0.2", 00:30:05.384 "adrfam": "ipv4", 00:30:05.384 "trsvcid": "4420", 00:30:05.384 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:05.384 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:05.384 "hdgst": false, 00:30:05.384 
"ddgst": false 00:30:05.384 }, 00:30:05.384 "method": "bdev_nvme_attach_controller" 00:30:05.384 },{ 00:30:05.384 "params": { 00:30:05.384 "name": "Nvme8", 00:30:05.384 "trtype": "tcp", 00:30:05.384 "traddr": "10.0.0.2", 00:30:05.384 "adrfam": "ipv4", 00:30:05.384 "trsvcid": "4420", 00:30:05.384 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:05.384 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:05.384 "hdgst": false, 00:30:05.384 "ddgst": false 00:30:05.384 }, 00:30:05.384 "method": "bdev_nvme_attach_controller" 00:30:05.384 },{ 00:30:05.384 "params": { 00:30:05.384 "name": "Nvme9", 00:30:05.384 "trtype": "tcp", 00:30:05.384 "traddr": "10.0.0.2", 00:30:05.384 "adrfam": "ipv4", 00:30:05.384 "trsvcid": "4420", 00:30:05.384 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:05.384 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:05.384 "hdgst": false, 00:30:05.384 "ddgst": false 00:30:05.384 }, 00:30:05.384 "method": "bdev_nvme_attach_controller" 00:30:05.384 },{ 00:30:05.384 "params": { 00:30:05.384 "name": "Nvme10", 00:30:05.384 "trtype": "tcp", 00:30:05.384 "traddr": "10.0.0.2", 00:30:05.384 "adrfam": "ipv4", 00:30:05.384 "trsvcid": "4420", 00:30:05.384 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:05.384 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:05.384 "hdgst": false, 00:30:05.384 "ddgst": false 00:30:05.384 }, 00:30:05.384 "method": "bdev_nvme_attach_controller" 00:30:05.384 }' 00:30:05.384 [2024-11-07 13:35:13.329817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.643 [2024-11-07 13:35:13.428737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.021 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:07.021 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:30:07.021 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:07.021 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.021 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:07.021 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.021 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3992283 00:30:07.021 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:30:07.021 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:30:07.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3992283 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:07.958 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3991886 00:30:07.958 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:07.958 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 
6 7 8 9 10 00:30:07.958 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:30:07.958 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:30:07.958 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.958 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.958 { 00:30:07.958 "params": { 00:30:07.958 "name": "Nvme$subsystem", 00:30:07.958 "trtype": "$TEST_TRANSPORT", 00:30:07.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.958 "adrfam": "ipv4", 00:30:07.958 "trsvcid": "$NVMF_PORT", 00:30:07.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.958 "hdgst": ${hdgst:-false}, 00:30:07.958 "ddgst": ${ddgst:-false} 00:30:07.958 }, 00:30:07.958 "method": "bdev_nvme_attach_controller" 00:30:07.958 } 00:30:07.958 EOF 00:30:07.958 )") 00:30:07.958 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:08.219 { 00:30:08.219 "params": { 00:30:08.219 "name": "Nvme$subsystem", 00:30:08.219 "trtype": "$TEST_TRANSPORT", 00:30:08.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.219 "adrfam": "ipv4", 00:30:08.219 "trsvcid": "$NVMF_PORT", 00:30:08.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.219 "hdgst": ${hdgst:-false}, 00:30:08.219 "ddgst": ${ddgst:-false} 00:30:08.219 }, 00:30:08.219 "method": "bdev_nvme_attach_controller" 00:30:08.219 } 00:30:08.219 EOF 00:30:08.219 )") 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:08.219 { 00:30:08.219 "params": { 00:30:08.219 "name": "Nvme$subsystem", 00:30:08.219 "trtype": "$TEST_TRANSPORT", 00:30:08.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.219 "adrfam": "ipv4", 00:30:08.219 "trsvcid": "$NVMF_PORT", 00:30:08.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.219 "hdgst": ${hdgst:-false}, 00:30:08.219 "ddgst": ${ddgst:-false} 00:30:08.219 }, 00:30:08.219 "method": "bdev_nvme_attach_controller" 00:30:08.219 } 00:30:08.219 EOF 00:30:08.219 )") 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:08.219 { 00:30:08.219 "params": { 00:30:08.219 "name": "Nvme$subsystem", 00:30:08.219 "trtype": "$TEST_TRANSPORT", 00:30:08.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.219 "adrfam": "ipv4", 00:30:08.219 "trsvcid": 
"$NVMF_PORT", 00:30:08.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.219 "hdgst": ${hdgst:-false}, 00:30:08.219 "ddgst": ${ddgst:-false} 00:30:08.219 }, 00:30:08.219 "method": "bdev_nvme_attach_controller" 00:30:08.219 } 00:30:08.219 EOF 00:30:08.219 )") 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:08.219 { 00:30:08.219 "params": { 00:30:08.219 "name": "Nvme$subsystem", 00:30:08.219 "trtype": "$TEST_TRANSPORT", 00:30:08.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.219 "adrfam": "ipv4", 00:30:08.219 "trsvcid": "$NVMF_PORT", 00:30:08.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.219 "hdgst": ${hdgst:-false}, 00:30:08.219 "ddgst": ${ddgst:-false} 00:30:08.219 }, 00:30:08.219 "method": "bdev_nvme_attach_controller" 00:30:08.219 } 00:30:08.219 EOF 00:30:08.219 )") 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:08.219 { 00:30:08.219 "params": { 00:30:08.219 "name": "Nvme$subsystem", 00:30:08.219 "trtype": "$TEST_TRANSPORT", 00:30:08.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.219 "adrfam": "ipv4", 00:30:08.219 "trsvcid": "$NVMF_PORT", 00:30:08.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.219 "hdgst": ${hdgst:-false}, 00:30:08.219 "ddgst": ${ddgst:-false} 00:30:08.219 }, 00:30:08.219 "method": "bdev_nvme_attach_controller" 00:30:08.219 } 00:30:08.219 EOF 00:30:08.219 )") 00:30:08.219 13:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:08.219 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:08.219 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:08.219 { 00:30:08.219 "params": { 00:30:08.219 "name": "Nvme$subsystem", 00:30:08.219 "trtype": "$TEST_TRANSPORT", 00:30:08.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.219 "adrfam": "ipv4", 00:30:08.219 "trsvcid": "$NVMF_PORT", 00:30:08.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.220 "hdgst": ${hdgst:-false}, 00:30:08.220 "ddgst": ${ddgst:-false} 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 } 00:30:08.220 EOF 00:30:08.220 )") 00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:08.220 { 00:30:08.220 "params": 
{ 00:30:08.220 "name": "Nvme$subsystem", 00:30:08.220 "trtype": "$TEST_TRANSPORT", 00:30:08.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "$NVMF_PORT", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.220 "hdgst": ${hdgst:-false}, 00:30:08.220 "ddgst": ${ddgst:-false} 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 } 00:30:08.220 EOF 00:30:08.220 )") 00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:08.220 { 00:30:08.220 "params": { 00:30:08.220 "name": "Nvme$subsystem", 00:30:08.220 "trtype": "$TEST_TRANSPORT", 00:30:08.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "$NVMF_PORT", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.220 "hdgst": ${hdgst:-false}, 00:30:08.220 "ddgst": ${ddgst:-false} 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 } 00:30:08.220 EOF 00:30:08.220 )") 00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:08.220 { 00:30:08.220 "params": { 00:30:08.220 "name": "Nvme$subsystem", 00:30:08.220 "trtype": "$TEST_TRANSPORT", 00:30:08.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "$NVMF_PORT", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.220 "hdgst": ${hdgst:-false}, 00:30:08.220 "ddgst": ${ddgst:-false} 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 } 00:30:08.220 EOF 00:30:08.220 )") 00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:30:08.220 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:08.220 "params": { 00:30:08.220 "name": "Nvme1", 00:30:08.220 "trtype": "tcp", 00:30:08.220 "traddr": "10.0.0.2", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "4420", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:08.220 "hdgst": false, 00:30:08.220 "ddgst": false 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 },{ 00:30:08.220 "params": { 00:30:08.220 "name": "Nvme2", 00:30:08.220 "trtype": "tcp", 00:30:08.220 "traddr": "10.0.0.2", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "4420", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:08.220 "hdgst": false, 00:30:08.220 "ddgst": false 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 },{ 00:30:08.220 "params": { 00:30:08.220 "name": "Nvme3", 00:30:08.220 "trtype": "tcp", 00:30:08.220 "traddr": "10.0.0.2", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "4420", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:08.220 "hdgst": false, 00:30:08.220 "ddgst": false 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 },{ 00:30:08.220 "params": { 00:30:08.220 "name": "Nvme4", 00:30:08.220 "trtype": "tcp", 00:30:08.220 "traddr": "10.0.0.2", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "4420", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:08.220 "hdgst": false, 00:30:08.220 "ddgst": false 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 },{ 00:30:08.220 "params": { 00:30:08.220 "name": "Nvme5", 00:30:08.220 "trtype": "tcp", 00:30:08.220 "traddr": "10.0.0.2", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "4420", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:08.220 "hdgst": false, 00:30:08.220 "ddgst": false 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 },{ 00:30:08.220 "params": { 00:30:08.220 "name": "Nvme6", 00:30:08.220 "trtype": "tcp", 00:30:08.220 "traddr": "10.0.0.2", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "4420", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:08.220 "hdgst": false, 00:30:08.220 "ddgst": false 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 },{ 00:30:08.220 "params": { 00:30:08.220 "name": "Nvme7", 00:30:08.220 "trtype": "tcp", 00:30:08.220 "traddr": "10.0.0.2", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "4420", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:08.220 "hdgst": false, 00:30:08.220 "ddgst": false 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 },{ 00:30:08.220 "params": { 00:30:08.220 "name": "Nvme8", 00:30:08.220 "trtype": "tcp", 00:30:08.220 "traddr": "10.0.0.2", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "4420", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:30:08.220 "hdgst": false, 00:30:08.220 "ddgst": false 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 },{ 00:30:08.220 "params": { 00:30:08.220 "name": "Nvme9", 00:30:08.220 "trtype": "tcp", 00:30:08.220 "traddr": "10.0.0.2", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "4420", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:08.220 "hdgst": false, 00:30:08.220 "ddgst": false 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 },{ 00:30:08.220 "params": { 00:30:08.220 "name": "Nvme10", 00:30:08.220 "trtype": "tcp", 00:30:08.220 "traddr": "10.0.0.2", 00:30:08.220 "adrfam": "ipv4", 00:30:08.220 "trsvcid": "4420", 00:30:08.220 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:08.220 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:08.220 "hdgst": false, 00:30:08.220 "ddgst": false 00:30:08.220 }, 00:30:08.220 "method": "bdev_nvme_attach_controller" 00:30:08.220 }' 00:30:08.220 [2024-11-07 13:35:16.042787] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:30:08.220 [2024-11-07 13:35:16.042943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992799 ] 00:30:08.220 [2024-11-07 13:35:16.183740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.481 [2024-11-07 13:35:16.282363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.861 Running I/O for 1 seconds... 00:30:11.061 1728.00 IOPS, 108.00 MiB/s 00:30:11.061 Latency(us) 00:30:11.061 [2024-11-07T12:35:19.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.061 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:11.061 Verification LBA range: start 0x0 length 0x400 00:30:11.061 Nvme1n1 : 1.15 222.50 13.91 0.00 0.00 284474.45 20097.71 255153.49 00:30:11.061 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:11.061 Verification LBA range: start 0x0 length 0x400 00:30:11.061 Nvme2n1 : 1.16 221.42 13.84 0.00 0.00 281135.79 23592.96 260396.37 00:30:11.061 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:11.061 Verification LBA range: start 0x0 length 0x400 00:30:11.062 Nvme3n1 : 1.15 223.45 13.97 0.00 0.00 273680.21 17803.95 267386.88 00:30:11.062 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:11.062 Verification LBA range: start 0x0 length 0x400 00:30:11.062 Nvme4n1 : 1.13 226.27 14.14 0.00 0.00 264321.07 16711.68 277872.64 00:30:11.062 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:11.062 Verification LBA range: start 0x0 length 0x400 00:30:11.062 Nvme5n1 : 1.20 214.18 13.39 0.00 0.00 275907.63 18568.53 293601.28 00:30:11.062 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:11.062 Verification LBA range: start 0x0 length 0x400 00:30:11.062 Nvme6n1 : 1.14 224.69 14.04 0.00 0.00 257376.00 18459.31 269134.51 00:30:11.062 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:11.062 Verification LBA range: start 0x0 length 0x400 00:30:11.062 Nvme7n1 : 1.17 221.83 13.86 0.00 0.00 256202.84 1303.89 269134.51 00:30:11.062 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:11.062 
Verification LBA range: start 0x0 length 0x400 00:30:11.062 Nvme8n1 : 1.16 220.87 13.80 0.00 0.00 252461.44 19551.57 270882.13 00:30:11.062 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:11.062 Verification LBA range: start 0x0 length 0x400 00:30:11.062 Nvme9n1 : 1.20 212.79 13.30 0.00 0.00 258686.08 18786.99 293601.28 00:30:11.062 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:11.062 Verification LBA range: start 0x0 length 0x400 00:30:11.062 Nvme10n1 : 1.21 263.46 16.47 0.00 0.00 205394.43 9939.63 270882.13 00:30:11.062 [2024-11-07T12:35:19.069Z] =================================================================================================================== 00:30:11.062 [2024-11-07T12:35:19.069Z] Total : 2251.44 140.72 0.00 0.00 259604.75 1303.89 293601.28 00:30:11.631 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:30:11.631 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:11.631 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:11.631 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:11.631 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:11.631 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:11.631 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:30:11.631 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:11.631 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:30:11.631 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:11.631 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:11.889 rmmod nvme_tcp 00:30:11.890 rmmod nvme_fabrics 00:30:11.890 rmmod nvme_keyring 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3991886 ']' 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3991886 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3991886 ']' 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3991886 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3991886 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3991886' 00:30:11.890 killing process with pid 3991886 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3991886 00:30:11.890 13:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3991886 00:30:13.268 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:13.268 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:13.268 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:13.268 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:30:13.268 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:30:13.268 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:13.268 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:30:13.268 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.268 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:13.268 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.268 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.268 13:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:15.808 00:30:15.808 real 0m20.354s 00:30:15.808 user 0m44.699s 00:30:15.808 sys 0m7.854s 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:15.808 ************************************ 00:30:15.808 END TEST nvmf_shutdown_tc1 00:30:15.808 ************************************ 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 
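[annotation] This is the core of tc1: the half-initialized bdev_svc client (pid 3992283) is SIGKILLed, kill -0 asserts the nvmf target (pid 3991886) survived its client's death, bdevperf then drives verify I/O over all ten controllers (the IOPS/latency table above), and only afterwards is the target torn down. The gen_nvmf_target_json helper emits one bdev_nvme_attach_controller stanza per cnode, as visible in the printf output above. A condensed reconstruction of the sequence, with variable names mirroring the script (the bdevperf path is abbreviated):

kill -9 "$perfpid"                  # SIGKILL the bdev_svc client mid-setup
rm -f /var/run/spdk_bdev1
kill -0 "$nvmfpid"                  # assert the target is still alive
bdevperf --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1   # verify I/O across Nvme1..Nvme10
killprocess "$nvmfpid"              # clean teardown; rmmod + iptables restore follow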
00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:15.808 ************************************ 00:30:15.808 START TEST nvmf_shutdown_tc2 00:30:15.808 ************************************ 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:30:15.808 13:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:15.808 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:15.808 13:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:15.808 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:15.808 Found net devices under 0000:31:00.0: cvl_0_0 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.808 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:15.809 Found net devices under 0000:31:00.1: cvl_0_1 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:15.809 13:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:15.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:30:15.809 00:30:15.809 --- 10.0.0.2 ping statistics --- 00:30:15.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.809 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:15.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:30:15.809 00:30:15.809 --- 10.0.0.1 ping statistics --- 00:30:15.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.809 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3994358 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3994358 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3994358 ']' 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
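
The nvmf_tcp_init trace above condenses to a small recipe: the harness splits one dual-port NIC across two network namespaces so the NVMe/TCP target and initiator can exercise real hardware on a single host. A minimal sketch of the effective commands, assuming the same interface names (cvl_0_0/cvl_0_1) and addresses seen in the log:

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port leaves the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # reachability check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the split in place, NVMF_APP is prefixed with "${NVMF_TARGET_NS_CMD[@]}" (ip netns exec cvl_0_0_ns_spdk), so the nvmf_tgt launched next, and every listener it opens, lives on the target side.
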
00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:15.809 13:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.809 [2024-11-07 13:35:23.809527] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:30:15.809 [2024-11-07 13:35:23.809662] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.070 [2024-11-07 13:35:23.978297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.070 [2024-11-07 13:35:24.059902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.070 [2024-11-07 13:35:24.059942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.070 [2024-11-07 13:35:24.059951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.070 [2024-11-07 13:35:24.059959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.070 [2024-11-07 13:35:24.059966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.070 [2024-11-07 13:35:24.061722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.070 [2024-11-07 13:35:24.061882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:16.070 [2024-11-07 13:35:24.061969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.070 [2024-11-07 13:35:24.061994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.640 [2024-11-07 13:35:24.618086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:16.640 13:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:16.640 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.899 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:16.899 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.899 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:16.899 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.899 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:16.899 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.899 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:16.899 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.899 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.900 13:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.900 Malloc1 
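
The ten shutdown.sh@29 `cat` iterations above append one RPC batch per subsystem to rpcs.txt, which the rpc_cmd call that follows replays against the target; only the resulting bdev names (Malloc1 above, Malloc2 through Malloc10 just below) are echoed back. The heredoc bodies themselves are not shown in this log; a plausible reconstruction of one iteration, assuming the standard SPDK RPC names and an illustrative Malloc geometry (the size and block-size values are assumptions, not taken from this log):

    # one batch per subsystem i, appended to rpcs.txt
    bdev_malloc_create -b Malloc$i 64 512                            # 64 MiB bdev, 512 B blocks (assumed sizes)
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i   # allow-any-host subsystem
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i       # expose the bdev as a namespace
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The cnode$i names and the 10.0.0.2:4420 listener are confirmed below by the generated initiator config and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice.
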
00:30:16.900 [2024-11-07 13:35:24.762689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.900 Malloc2 00:30:16.900 Malloc3 00:30:17.158 Malloc4 00:30:17.158 Malloc5 00:30:17.158 Malloc6 00:30:17.158 Malloc7 00:30:17.416 Malloc8 00:30:17.416 Malloc9 00:30:17.416 Malloc10 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3994697 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3994697 /var/tmp/bdevperf.sock 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3994697 ']' 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:17.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.416 { 00:30:17.416 "params": { 00:30:17.416 "name": "Nvme$subsystem", 00:30:17.416 "trtype": "$TEST_TRANSPORT", 00:30:17.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.416 "adrfam": "ipv4", 00:30:17.416 "trsvcid": "$NVMF_PORT", 00:30:17.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.416 "hdgst": ${hdgst:-false}, 00:30:17.416 "ddgst": ${ddgst:-false} 00:30:17.416 }, 00:30:17.416 "method": "bdev_nvme_attach_controller" 00:30:17.416 } 00:30:17.416 EOF 00:30:17.416 )") 00:30:17.416 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.417 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.417 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.417 { 00:30:17.417 "params": { 00:30:17.417 "name": "Nvme$subsystem", 00:30:17.417 "trtype": "$TEST_TRANSPORT", 00:30:17.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.417 "adrfam": "ipv4", 00:30:17.417 "trsvcid": "$NVMF_PORT", 00:30:17.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.417 "hdgst": ${hdgst:-false}, 00:30:17.417 "ddgst": ${ddgst:-false} 00:30:17.417 }, 00:30:17.417 "method": "bdev_nvme_attach_controller" 00:30:17.417 } 00:30:17.417 EOF 00:30:17.417 )") 00:30:17.675 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.675 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.676 { 00:30:17.676 "params": { 00:30:17.676 "name": "Nvme$subsystem", 00:30:17.676 "trtype": "$TEST_TRANSPORT", 00:30:17.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.676 "adrfam": "ipv4", 00:30:17.676 "trsvcid": "$NVMF_PORT", 00:30:17.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.676 "hdgst": ${hdgst:-false}, 00:30:17.676 "ddgst": ${ddgst:-false} 00:30:17.676 }, 00:30:17.676 "method": 
"bdev_nvme_attach_controller" 00:30:17.676 } 00:30:17.676 EOF 00:30:17.676 )") 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.676 { 00:30:17.676 "params": { 00:30:17.676 "name": "Nvme$subsystem", 00:30:17.676 "trtype": "$TEST_TRANSPORT", 00:30:17.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.676 "adrfam": "ipv4", 00:30:17.676 "trsvcid": "$NVMF_PORT", 00:30:17.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.676 "hdgst": ${hdgst:-false}, 00:30:17.676 "ddgst": ${ddgst:-false} 00:30:17.676 }, 00:30:17.676 "method": "bdev_nvme_attach_controller" 00:30:17.676 } 00:30:17.676 EOF 00:30:17.676 )") 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.676 { 00:30:17.676 "params": { 00:30:17.676 "name": "Nvme$subsystem", 00:30:17.676 "trtype": "$TEST_TRANSPORT", 00:30:17.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.676 "adrfam": "ipv4", 00:30:17.676 "trsvcid": "$NVMF_PORT", 00:30:17.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.676 "hdgst": ${hdgst:-false}, 00:30:17.676 "ddgst": ${ddgst:-false} 00:30:17.676 }, 00:30:17.676 "method": "bdev_nvme_attach_controller" 00:30:17.676 } 00:30:17.676 EOF 00:30:17.676 )") 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.676 { 00:30:17.676 "params": { 00:30:17.676 "name": "Nvme$subsystem", 00:30:17.676 "trtype": "$TEST_TRANSPORT", 00:30:17.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.676 "adrfam": "ipv4", 00:30:17.676 "trsvcid": "$NVMF_PORT", 00:30:17.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.676 "hdgst": ${hdgst:-false}, 00:30:17.676 "ddgst": ${ddgst:-false} 00:30:17.676 }, 00:30:17.676 "method": "bdev_nvme_attach_controller" 00:30:17.676 } 00:30:17.676 EOF 00:30:17.676 )") 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.676 { 00:30:17.676 "params": { 00:30:17.676 "name": "Nvme$subsystem", 00:30:17.676 "trtype": "$TEST_TRANSPORT", 00:30:17.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.676 "adrfam": "ipv4", 00:30:17.676 "trsvcid": "$NVMF_PORT", 00:30:17.676 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.676 "hdgst": ${hdgst:-false}, 00:30:17.676 "ddgst": ${ddgst:-false} 00:30:17.676 }, 00:30:17.676 "method": "bdev_nvme_attach_controller" 00:30:17.676 } 00:30:17.676 EOF 00:30:17.676 )") 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.676 { 00:30:17.676 "params": { 00:30:17.676 "name": "Nvme$subsystem", 00:30:17.676 "trtype": "$TEST_TRANSPORT", 00:30:17.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.676 "adrfam": "ipv4", 00:30:17.676 "trsvcid": "$NVMF_PORT", 00:30:17.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.676 "hdgst": ${hdgst:-false}, 00:30:17.676 "ddgst": ${ddgst:-false} 00:30:17.676 }, 00:30:17.676 "method": "bdev_nvme_attach_controller" 00:30:17.676 } 00:30:17.676 EOF 00:30:17.676 )") 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.676 { 00:30:17.676 "params": { 00:30:17.676 "name": "Nvme$subsystem", 00:30:17.676 "trtype": "$TEST_TRANSPORT", 00:30:17.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.676 "adrfam": "ipv4", 00:30:17.676 "trsvcid": "$NVMF_PORT", 00:30:17.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.676 "hdgst": ${hdgst:-false}, 00:30:17.676 "ddgst": ${ddgst:-false} 00:30:17.676 }, 00:30:17.676 "method": "bdev_nvme_attach_controller" 00:30:17.676 } 00:30:17.676 EOF 00:30:17.676 )") 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.676 { 00:30:17.676 "params": { 00:30:17.676 "name": "Nvme$subsystem", 00:30:17.676 "trtype": "$TEST_TRANSPORT", 00:30:17.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.676 "adrfam": "ipv4", 00:30:17.676 "trsvcid": "$NVMF_PORT", 00:30:17.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.676 "hdgst": ${hdgst:-false}, 00:30:17.676 "ddgst": ${ddgst:-false} 00:30:17.676 }, 00:30:17.676 "method": "bdev_nvme_attach_controller" 00:30:17.676 } 00:30:17.676 EOF 00:30:17.676 )") 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.676 [2024-11-07 13:35:25.487347] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:30:17.676 [2024-11-07 13:35:25.487457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3994697 ] 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:30:17.676 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:17.676 "params": { 00:30:17.676 "name": "Nvme1", 00:30:17.676 "trtype": "tcp", 00:30:17.676 "traddr": "10.0.0.2", 00:30:17.676 "adrfam": "ipv4", 00:30:17.676 "trsvcid": "4420", 00:30:17.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:17.676 "hdgst": false, 00:30:17.676 "ddgst": false 00:30:17.676 }, 00:30:17.676 "method": "bdev_nvme_attach_controller" 00:30:17.676 },{ 00:30:17.676 "params": { 00:30:17.676 "name": "Nvme2", 00:30:17.676 "trtype": "tcp", 00:30:17.676 "traddr": "10.0.0.2", 00:30:17.676 "adrfam": "ipv4", 00:30:17.676 "trsvcid": "4420", 00:30:17.676 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:17.676 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:17.676 "hdgst": false, 00:30:17.676 "ddgst": false 00:30:17.676 }, 00:30:17.676 "method": "bdev_nvme_attach_controller" 00:30:17.676 },{ 00:30:17.676 "params": { 00:30:17.676 "name": "Nvme3", 00:30:17.676 "trtype": "tcp", 00:30:17.676 "traddr": "10.0.0.2", 00:30:17.676 "adrfam": "ipv4", 00:30:17.676 "trsvcid": "4420", 00:30:17.676 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:17.676 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:17.676 "hdgst": false, 00:30:17.676 "ddgst": false 00:30:17.676 }, 00:30:17.676 "method": "bdev_nvme_attach_controller" 00:30:17.676 },{ 00:30:17.676 "params": { 00:30:17.676 "name": "Nvme4", 00:30:17.676 "trtype": "tcp", 00:30:17.676 "traddr": "10.0.0.2", 00:30:17.676 "adrfam": "ipv4", 00:30:17.676 "trsvcid": "4420", 00:30:17.677 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:17.677 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:17.677 "hdgst": false, 00:30:17.677 "ddgst": false 00:30:17.677 }, 00:30:17.677 "method": "bdev_nvme_attach_controller" 00:30:17.677 },{ 00:30:17.677 "params": { 00:30:17.677 "name": "Nvme5", 00:30:17.677 "trtype": "tcp", 00:30:17.677 "traddr": "10.0.0.2", 00:30:17.677 "adrfam": "ipv4", 00:30:17.677 "trsvcid": "4420", 00:30:17.677 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:17.677 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:17.677 "hdgst": false, 00:30:17.677 "ddgst": false 00:30:17.677 }, 00:30:17.677 "method": "bdev_nvme_attach_controller" 00:30:17.677 },{ 00:30:17.677 "params": { 00:30:17.677 "name": "Nvme6", 00:30:17.677 "trtype": "tcp", 00:30:17.677 "traddr": "10.0.0.2", 00:30:17.677 "adrfam": "ipv4", 00:30:17.677 "trsvcid": "4420", 00:30:17.677 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:17.677 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:17.677 "hdgst": false, 00:30:17.677 "ddgst": false 00:30:17.677 }, 00:30:17.677 "method": "bdev_nvme_attach_controller" 00:30:17.677 },{ 00:30:17.677 "params": { 00:30:17.677 "name": "Nvme7", 00:30:17.677 "trtype": "tcp", 00:30:17.677 "traddr": "10.0.0.2", 00:30:17.677 "adrfam": "ipv4", 00:30:17.677 "trsvcid": "4420", 00:30:17.677 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:17.677 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:17.677 "hdgst": 
false, 00:30:17.677 "ddgst": false 00:30:17.677 }, 00:30:17.677 "method": "bdev_nvme_attach_controller" 00:30:17.677 },{ 00:30:17.677 "params": { 00:30:17.677 "name": "Nvme8", 00:30:17.677 "trtype": "tcp", 00:30:17.677 "traddr": "10.0.0.2", 00:30:17.677 "adrfam": "ipv4", 00:30:17.677 "trsvcid": "4420", 00:30:17.677 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:17.677 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:17.677 "hdgst": false, 00:30:17.677 "ddgst": false 00:30:17.677 }, 00:30:17.677 "method": "bdev_nvme_attach_controller" 00:30:17.677 },{ 00:30:17.677 "params": { 00:30:17.677 "name": "Nvme9", 00:30:17.677 "trtype": "tcp", 00:30:17.677 "traddr": "10.0.0.2", 00:30:17.677 "adrfam": "ipv4", 00:30:17.677 "trsvcid": "4420", 00:30:17.677 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:17.677 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:17.677 "hdgst": false, 00:30:17.677 "ddgst": false 00:30:17.677 }, 00:30:17.677 "method": "bdev_nvme_attach_controller" 00:30:17.677 },{ 00:30:17.677 "params": { 00:30:17.677 "name": "Nvme10", 00:30:17.677 "trtype": "tcp", 00:30:17.677 "traddr": "10.0.0.2", 00:30:17.677 "adrfam": "ipv4", 00:30:17.677 "trsvcid": "4420", 00:30:17.677 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:17.677 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:17.677 "hdgst": false, 00:30:17.677 "ddgst": false 00:30:17.677 }, 00:30:17.677 "method": "bdev_nvme_attach_controller" 00:30:17.677 }' 00:30:17.677 [2024-11-07 13:35:25.626824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.935 [2024-11-07 13:35:25.724695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.841 Running I/O for 10 seconds... 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.101 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.101 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.101 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:30:20.101 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:30:20.101 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3994697 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3994697 ']' 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3994697 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:20.361 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3994697 00:30:20.621 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:20.621 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:20.621 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3994697' 00:30:20.621 killing process with pid 3994697 00:30:20.621 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3994697 00:30:20.621 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3994697 00:30:20.621 Received shutdown signal, test time was about 0.973989 seconds 00:30:20.621 00:30:20.621 Latency(us) 00:30:20.621 [2024-11-07T12:35:28.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.621 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.621 Verification LBA range: start 0x0 length 0x400 00:30:20.621 Nvme1n1 : 0.93 206.04 12.88 0.00 0.00 306534.12 20534.61 269134.51 00:30:20.621 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.621 Verification LBA range: start 0x0 length 0x400 00:30:20.621 Nvme2n1 : 0.95 202.31 12.64 0.00 0.00 305735.96 18786.99 269134.51 00:30:20.621 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.621 Verification LBA range: start 0x0 length 0x400 00:30:20.621 Nvme3n1 : 0.96 265.48 16.59 0.00 0.00 228363.09 18786.99 272629.76 00:30:20.621 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.621 Verification LBA range: start 0x0 length 0x400 00:30:20.621 Nvme4n1 : 0.97 264.02 16.50 0.00 0.00 224685.23 17913.17 265639.25 00:30:20.621 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.621 Verification LBA range: start 0x0 length 0x400 00:30:20.621 Nvme5n1 : 0.94 203.42 12.71 0.00 0.00 284431.36 24466.77 269134.51 00:30:20.621 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.621 Verification LBA range: start 0x0 length 0x400 00:30:20.621 Nvme6n1 : 0.96 200.99 12.56 0.00 0.00 281822.72 20206.93 279620.27 00:30:20.621 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.621 Verification LBA range: start 0x0 length 0x400 00:30:20.621 Nvme7n1 : 0.97 263.08 16.44 0.00 0.00 210772.69 22937.60 269134.51 00:30:20.621 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.621 Verification LBA range: start 0x0 length 0x400 00:30:20.621 Nvme8n1 : 0.94 204.39 12.77 0.00 0.00 263250.49 38447.79 248162.99 00:30:20.621 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.621 Verification LBA range: start 0x0 length 0x400 00:30:20.621 Nvme9n1 : 0.96 199.78 12.49 0.00 0.00 264306.35 15400.96 295348.91 00:30:20.621 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.621 Verification LBA range: start 0x0 length 0x400 00:30:20.621 Nvme10n1 : 0.92 208.22 13.01 0.00 0.00 244797.72 18131.63 269134.51 00:30:20.621 [2024-11-07T12:35:28.628Z] =================================================================================================================== 00:30:20.621 [2024-11-07T12:35:28.628Z] Total : 2217.74 138.61 0.00 0.00 257815.76 15400.96 295348.91 00:30:21.192 13:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3994358 00:30:22.587 13:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:22.587 rmmod nvme_tcp 00:30:22.587 rmmod nvme_fabrics 00:30:22.587 rmmod nvme_keyring 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3994358 ']' 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3994358 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3994358 ']' 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3994358 00:30:22.587 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:30:22.588 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:22.588 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3994358 00:30:22.588 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:22.588 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:22.588 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3994358' 00:30:22.588 killing process with pid 3994358 00:30:22.588 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3994358 00:30:22.588 13:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3994358 00:30:23.970 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:23.970 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:23.970 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:23.970 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:30:23.970 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:30:23.971 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:23.971 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:30:23.971 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.971 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:23.971 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.971 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.971 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.880 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:25.880 00:30:25.880 real 0m10.517s 00:30:25.880 user 0m34.137s 00:30:25.880 sys 0m1.581s 00:30:25.880 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:25.880 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:25.880 ************************************ 00:30:25.880 END TEST nvmf_shutdown_tc2 00:30:25.880 ************************************ 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:26.141 ************************************ 00:30:26.141 START TEST nvmf_shutdown_tc3 00:30:26.141 ************************************ 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:26.141 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:30:26.141 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:26.141 Found net devices under 0000:31:00.0: cvl_0_0 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.141 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:26.142 Found net 
devices under 0000:31:00.1: cvl_0_1 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.142 13:35:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.142 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.142 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.142 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.142 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.403 13:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:30:26.403 00:30:26.403 --- 10.0.0.2 ping statistics --- 00:30:26.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.403 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:26.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:30:26.403 00:30:26.403 --- 10.0.0.1 ping statistics --- 00:30:26.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.403 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3996459 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3996459 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3996459 ']' 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:26.403 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.403 [2024-11-07 13:35:34.396670] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:30:26.403 [2024-11-07 13:35:34.396807] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.663 [2024-11-07 13:35:34.566089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:26.663 [2024-11-07 13:35:34.650367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.663 [2024-11-07 13:35:34.650406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.663 [2024-11-07 13:35:34.650415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.663 [2024-11-07 13:35:34.650423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.663 [2024-11-07 13:35:34.650430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
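The block above is the harness bringing the link pair up for a single-host NVMe/TCP run: nvmf_tcp_init moves one E810 port (cvl_0_0) into a private network namespace as the target side at 10.0.0.2, leaves its sibling cvl_0_1 in the root namespace as the initiator at 10.0.0.1, opens TCP port 4420 in iptables, ping-checks both directions, and nvmfappstart then launches nvmf_tgt inside that namespace and waits for its RPC socket. A minimal stand-alone sketch of the same bring-up, run as root, using the interface names and addresses this log shows; the relative binary/script paths and the simple polling loop are stand-ins for the absolute workspace paths and the waitforlisten helper the harness actually uses:

# Sketch reconstructed from the nvmf_tcp_init/nvmfappstart trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port goes into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

# Start the target inside the namespace, then block until its RPC socket
# answers; this simple loop plays the role waitforlisten plays above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    kill -0 "$nvmfpid" || exit 1                     # give up if the target died
    sleep 0.5
done

Routing target and initiator over two physical ports of the same NIC, separated only by the namespace, is what lets one machine exercise the real e810 datapath end to end.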
00:30:26.663 [2024-11-07 13:35:34.652231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.663 [2024-11-07 13:35:34.652377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:26.663 [2024-11-07 13:35:34.652484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.663 [2024-11-07 13:35:34.652511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:27.233 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:27.233 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:30:27.233 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:27.233 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:27.233 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.233 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.233 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:27.233 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.233 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.233 [2024-11-07 13:35:35.212380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.493 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.494 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.494 Malloc1 00:30:27.494 [2024-11-07 13:35:35.357702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.494 Malloc2 00:30:27.494 Malloc3 00:30:27.753 Malloc4 00:30:27.753 Malloc5 00:30:27.753 Malloc6 00:30:27.753 Malloc7 00:30:28.013 Malloc8 00:30:28.013 Malloc9 00:30:28.013 Malloc10 00:30:28.013 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.013 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:28.013 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:28.013 13:35:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3996805 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3996805 /var/tmp/bdevperf.sock 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3996805 ']' 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:28.013 13:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:28.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.013 { 00:30:28.013 "params": { 00:30:28.013 "name": "Nvme$subsystem", 00:30:28.013 "trtype": "$TEST_TRANSPORT", 00:30:28.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.013 "adrfam": "ipv4", 00:30:28.013 "trsvcid": "$NVMF_PORT", 00:30:28.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.013 "hdgst": ${hdgst:-false}, 00:30:28.013 "ddgst": ${ddgst:-false} 00:30:28.013 }, 00:30:28.013 "method": "bdev_nvme_attach_controller" 00:30:28.013 } 00:30:28.013 EOF 00:30:28.013 )") 00:30:28.013 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.274 { 00:30:28.274 "params": { 00:30:28.274 "name": "Nvme$subsystem", 00:30:28.274 "trtype": "$TEST_TRANSPORT", 00:30:28.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.274 "adrfam": "ipv4", 00:30:28.274 "trsvcid": "$NVMF_PORT", 00:30:28.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.274 "hdgst": ${hdgst:-false}, 00:30:28.274 "ddgst": ${ddgst:-false} 00:30:28.274 }, 00:30:28.274 "method": "bdev_nvme_attach_controller" 00:30:28.274 } 00:30:28.274 EOF 00:30:28.274 )") 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.274 { 00:30:28.274 "params": { 00:30:28.274 
"name": "Nvme$subsystem", 00:30:28.274 "trtype": "$TEST_TRANSPORT", 00:30:28.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.274 "adrfam": "ipv4", 00:30:28.274 "trsvcid": "$NVMF_PORT", 00:30:28.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.274 "hdgst": ${hdgst:-false}, 00:30:28.274 "ddgst": ${ddgst:-false} 00:30:28.274 }, 00:30:28.274 "method": "bdev_nvme_attach_controller" 00:30:28.274 } 00:30:28.274 EOF 00:30:28.274 )") 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.274 { 00:30:28.274 "params": { 00:30:28.274 "name": "Nvme$subsystem", 00:30:28.274 "trtype": "$TEST_TRANSPORT", 00:30:28.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.274 "adrfam": "ipv4", 00:30:28.274 "trsvcid": "$NVMF_PORT", 00:30:28.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.274 "hdgst": ${hdgst:-false}, 00:30:28.274 "ddgst": ${ddgst:-false} 00:30:28.274 }, 00:30:28.274 "method": "bdev_nvme_attach_controller" 00:30:28.274 } 00:30:28.274 EOF 00:30:28.274 )") 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.274 { 00:30:28.274 "params": { 00:30:28.274 "name": "Nvme$subsystem", 00:30:28.274 "trtype": "$TEST_TRANSPORT", 00:30:28.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.274 "adrfam": "ipv4", 00:30:28.274 "trsvcid": "$NVMF_PORT", 00:30:28.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.274 "hdgst": ${hdgst:-false}, 00:30:28.274 "ddgst": ${ddgst:-false} 00:30:28.274 }, 00:30:28.274 "method": "bdev_nvme_attach_controller" 00:30:28.274 } 00:30:28.274 EOF 00:30:28.274 )") 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.274 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.274 { 00:30:28.274 "params": { 00:30:28.274 "name": "Nvme$subsystem", 00:30:28.274 "trtype": "$TEST_TRANSPORT", 00:30:28.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.274 "adrfam": "ipv4", 00:30:28.274 "trsvcid": "$NVMF_PORT", 00:30:28.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.275 "hdgst": ${hdgst:-false}, 00:30:28.275 "ddgst": ${ddgst:-false} 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 } 00:30:28.275 EOF 00:30:28.275 )") 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.275 { 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme$subsystem", 00:30:28.275 "trtype": "$TEST_TRANSPORT", 00:30:28.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "$NVMF_PORT", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.275 "hdgst": ${hdgst:-false}, 00:30:28.275 "ddgst": ${ddgst:-false} 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 } 00:30:28.275 EOF 00:30:28.275 )") 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.275 { 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme$subsystem", 00:30:28.275 "trtype": "$TEST_TRANSPORT", 00:30:28.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "$NVMF_PORT", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.275 "hdgst": ${hdgst:-false}, 00:30:28.275 "ddgst": ${ddgst:-false} 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 } 00:30:28.275 EOF 00:30:28.275 )") 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.275 { 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme$subsystem", 00:30:28.275 "trtype": "$TEST_TRANSPORT", 00:30:28.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "$NVMF_PORT", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.275 "hdgst": ${hdgst:-false}, 00:30:28.275 "ddgst": ${ddgst:-false} 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 } 00:30:28.275 EOF 00:30:28.275 )") 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.275 { 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme$subsystem", 00:30:28.275 "trtype": "$TEST_TRANSPORT", 00:30:28.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "$NVMF_PORT", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.275 "hdgst": ${hdgst:-false}, 00:30:28.275 "ddgst": ${ddgst:-false} 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 } 00:30:28.275 EOF 00:30:28.275 )") 00:30:28.275 13:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.275 [2024-11-07 13:35:36.089104] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:30:28.275 [2024-11-07 13:35:36.089211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996805 ] 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:30:28.275 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme1", 00:30:28.275 "trtype": "tcp", 00:30:28.275 "traddr": "10.0.0.2", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "4420", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.275 "hdgst": false, 00:30:28.275 "ddgst": false 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 },{ 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme2", 00:30:28.275 "trtype": "tcp", 00:30:28.275 "traddr": "10.0.0.2", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "4420", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:28.275 "hdgst": false, 00:30:28.275 "ddgst": false 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 },{ 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme3", 00:30:28.275 "trtype": "tcp", 00:30:28.275 "traddr": "10.0.0.2", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "4420", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:28.275 "hdgst": false, 00:30:28.275 "ddgst": false 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 },{ 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme4", 00:30:28.275 "trtype": "tcp", 00:30:28.275 "traddr": "10.0.0.2", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "4420", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:28.275 "hdgst": false, 00:30:28.275 "ddgst": false 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 },{ 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme5", 00:30:28.275 "trtype": "tcp", 00:30:28.275 "traddr": "10.0.0.2", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "4420", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:28.275 "hdgst": false, 00:30:28.275 "ddgst": false 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 },{ 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme6", 00:30:28.275 "trtype": "tcp", 00:30:28.275 "traddr": "10.0.0.2", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "4420", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:28.275 "hdgst": false, 00:30:28.275 "ddgst": false 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 },{ 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme7", 00:30:28.275 "trtype": "tcp", 00:30:28.275 
"traddr": "10.0.0.2", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "4420", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:28.275 "hdgst": false, 00:30:28.275 "ddgst": false 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 },{ 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme8", 00:30:28.275 "trtype": "tcp", 00:30:28.275 "traddr": "10.0.0.2", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "4420", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:28.275 "hdgst": false, 00:30:28.275 "ddgst": false 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 },{ 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme9", 00:30:28.275 "trtype": "tcp", 00:30:28.275 "traddr": "10.0.0.2", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "4420", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:28.275 "hdgst": false, 00:30:28.275 "ddgst": false 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 },{ 00:30:28.275 "params": { 00:30:28.275 "name": "Nvme10", 00:30:28.275 "trtype": "tcp", 00:30:28.275 "traddr": "10.0.0.2", 00:30:28.275 "adrfam": "ipv4", 00:30:28.275 "trsvcid": "4420", 00:30:28.275 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:28.275 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:28.275 "hdgst": false, 00:30:28.275 "ddgst": false 00:30:28.275 }, 00:30:28.275 "method": "bdev_nvme_attach_controller" 00:30:28.275 }' 00:30:28.275 [2024-11-07 13:35:36.226898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.536 [2024-11-07 13:35:36.325184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.445 Running I/O for 10 seconds... 
00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:30:30.704 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3996459 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3996459 ']' 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3996459 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:30.964 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3996459 00:30:31.239 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:31.239 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:31.239 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3996459' 00:30:31.239 killing process with pid 3996459 00:30:31.239 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3996459 00:30:31.239 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3996459 00:30:31.239 [2024-11-07 13:35:38.999207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.239 [2024-11-07 13:35:38.999260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.239 [2024-11-07 13:35:38.999269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.239 [2024-11-07 13:35:38.999276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.239 [2024-11-07 13:35:38.999283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.239 [2024-11-07 13:35:38.999290] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set
[message repeated verbatim for tqpair 0x618000009880 state transitions from 2024-11-07 13:35:38.999298 through 13:35:38.999666]
00:30:31.240 [2024-11-07 13:35:39.002403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.240 [2024-11-07 13:35:39.002453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.240 [2024-11-07 13:35:39.002482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.240 [2024-11-07 13:35:39.002495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.240 [2024-11-07 13:35:39.002510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
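From the kill onward the log is expected teardown noise rather than a failure: the target (reactor_1, pid 3996459) is terminated while bdevperf still has full queues, so every in-flight READ and WRITE completes with ABORTED - SQ DELETION and each dying connection logs repeated TCP qpair recv-state errors. The trigger logic, condensed from the waitforio/killprocess trace above into a runnable sketch; ./scripts/rpc.py stands in for the harness's rpc_cmd wrapper, and the hardcoded pid is this run's value:

# Condensed sketch of waitforio/killprocess: poll bdevperf's iostat until
# Nvme1n1 has completed at least 100 reads, then kill the target mid-workload.
nvmfpid=3996459                      # as captured at launch; wait below assumes this shell started it
while :; do
    reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break    # the trace saw 67 on the first poll, 131 on the second
    sleep 0.25
done
kill "$nvmfpid"                      # SIGTERM the target under load
wait "$nvmfpid"                      # reap it; the aborts around this note are the fallout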
00:30:31.240 [2024-11-07 13:35:39.002522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 
[2024-11-07 13:35:39.002774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.240 [2024-11-07 13:35:39.002860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.240 [2024-11-07 13:35:39.002880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.241 [2024-11-07 13:35:39.002893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.241 [2024-11-07 13:35:39.002904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.241 [2024-11-07 13:35:39.002917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.241 [2024-11-07 13:35:39.002928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.241 [2024-11-07 13:35:39.002942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.241 [2024-11-07 13:35:39.002952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.241 [2024-11-07 13:35:39.002965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.241 [2024-11-07 13:35:39.002976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.241 [2024-11-07 13:35:39.002990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.241 [2024-11-07 13:35:39.003001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.241 [2024-11-07 13:35:39.003014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.241 [2024-11-07 
13:35:39.003024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.241 [2024-11-07 13:35:39.003499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.241 [2024-11-07 13:35:39.003508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.241 [2024-11-07 13:35:39.003513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.242 [2024-11-07 13:35:39.003527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.242 [2024-11-07 13:35:39.003542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.242 [2024-11-07 13:35:39.003549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.242 [2024-11-07 13:35:39.003563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.242 [2024-11-07 13:35:39.003578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.242 [2024-11-07 13:35:39.003592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.242 [2024-11-07 13:35:39.003600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.242 [2024-11-07 13:35:39.003614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.242 [2024-11-07 13:35:39.003628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.242 [2024-11-07 13:35:39.003643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.242 [2024-11-07 13:35:39.003650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.242 [2024-11-07 13:35:39.003663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.242 [2024-11-07 13:35:39.003677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.242 [2024-11-07 13:35:39.003691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.242 [2024-11-07 13:35:39.003698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.242 [2024-11-07 13:35:39.003713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
00:30:31.242 [2024-11-07 13:35:39.003722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.242 [2024-11-07 13:35:39.003736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.242 [2024-11-07 13:35:39.003749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.242 [2024-11-07 13:35:39.003762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.242 [2024-11-07 13:35:39.003773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.242 [2024-11-07 13:35:39.003786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.242 [2024-11-07 13:35:39.003797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.242 [2024-11-07 13:35:39.003811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.242 [2024-11-07 13:35:39.003821] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.242 [2024-11-07 13:35:39.003834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.242 [2024-11-07 13:35:39.003845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.242 [2024-11-07 13:35:39.003858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.242 [2024-11-07 13:35:39.003874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.242 [2024-11-07 13:35:39.003886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.242 [2024-11-07 13:35:39.003897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.242 [2024-11-07 13:35:39.003910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.242 [2024-11-07 13:35:39.003921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.242 [2024-11-07 13:35:39.003935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.242 [2024-11-07 13:35:39.003945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.242 [2024-11-07 13:35:39.003958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.242 [2024-11-07 13:35:39.003969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.242 [2024-11-07 13:35:39.003982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.242 [2024-11-07 13:35:39.003993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.242 [2024-11-07 13:35:39.004006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.242 [2024-11-07 13:35:39.004029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.242 [2024-11-07 13:35:39.004043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.242 [2024-11-07 13:35:39.004053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.242 [2024-11-07 13:35:39.004066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.242 [2024-11-07 13:35:39.004077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.242 [2024-11-07 13:35:39.004091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.242 [2024-11-07 13:35:39.004102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.242 [2024-11-07 13:35:39.005316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.242 [2024-11-07 13:35:39.005345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.242 [2024-11-07 13:35:39.005353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.242 [2024-11-07 13:35:39.005375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005478] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005619] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005757] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.005974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.243 [2024-11-07 13:35:39.006003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.243 [2024-11-07 13:35:39.006017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.243 [2024-11-07 13:35:39.006029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.243 [2024-11-07 13:35:39.006042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.243 [2024-11-07 13:35:39.006053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.243 [2024-11-07 13:35:39.006064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.243 [2024-11-07 13:35:39.006075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.243 [2024-11-07 13:35:39.006087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041f300 is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.006170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.243 [2024-11-07 13:35:39.006185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.243 [2024-11-07 13:35:39.006197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.243 [2024-11-07 13:35:39.006208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.243 [2024-11-07 13:35:39.006220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.243 [2024-11-07 13:35:39.006231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.243 [2024-11-07 13:35:39.006243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.243 [2024-11-07 13:35:39.006254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.243 [2024-11-07 13:35:39.006264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000419900 
is same with the state(6) to be set 00:30:31.243 [2024-11-07 13:35:39.006308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.243 [2024-11-07 13:35:39.006321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.243 [2024-11-07 13:35:39.006333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.243 [2024-11-07 13:35:39.006344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.243 [2024-11-07 13:35:39.006357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.243 [2024-11-07 13:35:39.006368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.244 [2024-11-07 13:35:39.006379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.244 [2024-11-07 13:35:39.006390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.244 [2024-11-07 13:35:39.006400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.244 [2024-11-07 13:35:39.006444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.244 [2024-11-07 13:35:39.006456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.244 [2024-11-07 13:35:39.006468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.244 [2024-11-07 13:35:39.006479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.244 [2024-11-07 13:35:39.006489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.244 [2024-11-07 13:35:39.006501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.244 [2024-11-07 13:35:39.006514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.244 [2024-11-07 13:35:39.006525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000418a00 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.244 [2024-11-07 13:35:39.006571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.244 [2024-11-07 13:35:39.006583] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.244 [2024-11-07 13:35:39.006594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.244 [2024-11-07 13:35:39.006606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.244 [2024-11-07 13:35:39.006617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.244 [2024-11-07 13:35:39.006628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.244 [2024-11-07 13:35:39.006639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.244 [2024-11-07 13:35:39.006649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000417b00 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 
[2024-11-07 13:35:39.006943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.006999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 
[2024-11-07 13:35:39.007094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.244 [2024-11-07 13:35:39.007175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.245 [2024-11-07 13:35:39.007181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.245 [2024-11-07 13:35:39.007187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.245 [2024-11-07 13:35:39.007194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.245 [2024-11-07 13:35:39.007201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.245 [2024-11-07 13:35:39.007207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.245 [2024-11-07 13:35:39.007214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.245 [2024-11-07 13:35:39.007221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.245 [2024-11-07 13:35:39.007227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.245 
[2024-11-07 13:35:39.007233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:31.245
[identical entry repeated for tqpair=0x618000008480 from 13:35:39.007240 through 13:35:39.007272]
[2024-11-07 13:35:39.008544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:30:31.245
[identical entry repeated for tqpair=0x618000008880 from 13:35:39.008563 through 13:35:39.008987]
[2024-11-07 13:35:39.010630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:30:31.245
[2024-11-07 13:35:39.011797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:30:31.245
[2024-11-07 13:35:39.012389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.245
[identical entry repeated for tqpair=0x618000009480 from 13:35:39.012407 through 13:35:39.012819]
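The run of identical *ERROR* lines above is one guard firing in a loop: the target-side TCP transport refuses to "set" a receive state the qpair already holds, logs, and returns, so each distinct tqpair address in the run is one connection being torn down and the repetition count, not the message, is the signal. A minimal C sketch of that early-return, patterned only on the message text from lib/nvmf/tcp.c above; the enum values and struct layout here are assumptions for illustration, not SPDK's actual definitions:

#include <stdio.h>

/* Assumed stand-ins for SPDK's internal definitions (illustrative only);
 * state(6) in the log is taken here to be the terminal error state. */
enum nvme_tcp_pdu_recv_state {
	NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY = 0,
	/* ... intermediate PDU states elided ... */
	NVME_TCP_PDU_RECV_STATE_ERROR = 6,
};

struct spdk_nvmf_tcp_qpair {
	enum nvme_tcp_pdu_recv_state recv_state;
};

static void
nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
			      enum nvme_tcp_pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Exactly the message flooding the log above: harmless,
		 * but noisy when teardown paths race to mark a dying
		 * qpair as errored over and over. */
		fprintf(stderr, "The recv state of tqpair=%p is same with "
			"the state(%d) to be set\n", (void *)tqpair, state);
		return;
	}
	tqpair->recv_state = state;	/* real code also re-arms PDU reception */
}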
[2024-11-07 13:35:39.042506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.246
[2024-11-07 13:35:39.042545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.246
[command/completion pair repeated from 13:35:39.042580 through 13:35:39.044148 for WRITE cid:13-63 (lba:26240-32640, len:128) and READ cid:0-11 (lba:24576-25984, len:128), each ABORTED - SQ DELETION (00/08)]
[2024-11-07 13:35:39.044194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.248
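Every data command in the dump above carries the same completion status, (00/08): status code type 0x0 (generic command status) and status code 0x08, ABORTED - SQ DELETION, i.e. the I/O never executed and was flushed when qpair id 1 was deleted after the CQ transport error. A hedged sketch of recognizing that status in an I/O completion callback; SPDK_NVME_SCT_GENERIC and SPDK_NVME_SC_ABORTED_SQ_DELETION are public spdk/nvme_spec.h constants, while the callback wiring itself is illustrative:

#include <stdbool.h>
#include "spdk/nvme.h"

/* Matches the spdk_nvme_cmd_cb signature used for I/O submissions. */
static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	/* "(00/08)" in the log is sct/sc: generic type, SQ-deletion abort. */
	bool flushed_by_sq_deletion =
		cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
		cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;

	if (flushed_by_sq_deletion) {
		/* Not a media or protocol failure: the queue went away
		 * underneath the command. Such I/O is a candidate for
		 * requeueing once the controller reset finishes. */
	}

	(void)ctx;
}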
[2024-11-07 13:35:39.046548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.248
[2024-11-07 13:35:39.046574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.248
[command/completion pair repeated from 13:35:39.046598 through 13:35:39.048142 for WRITE cid:44-63 (lba:30208-32640, len:128) and READ cid:0-42 (lba:24576-29952, len:128), each ABORTED - SQ DELETION (00/08)]
[2024-11-07 13:35:39.048176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.250
[2024-11-07 13:35:39.052109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:30:31.250
[2024-11-07 13:35:39.052168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000417b00 (9): Bad file descriptor 00:30:31.250
[2024-11-07 13:35:39.052235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041f300 (9): Bad file descriptor 00:30:31.250
[2024-11-07 13:35:39.052285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.250
[2024-11-07 13:35:39.052301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.250
[admin command/completion pair repeated from 13:35:39.052314 through 13:35:39.052370 for ASYNC EVENT REQUEST cid:1-3, each ABORTED - SQ DELETION (00/08)]
[2024-11-07 13:35:39.052381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041e400 is same with the state(6) to be set 00:30:31.250
[admin ASYNC EVENT REQUEST cid:0-3 aborted again from 13:35:39.052415 through 13:35:39.052495, each ABORTED - SQ DELETION (00/08)]
[2024-11-07 13:35:39.052505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041b700 is same with the state(6) to be set 00:30:31.250
[admin ASYNC EVENT REQUEST cid:0-3 aborted again from 13:35:39.052547 through 13:35:39.052632, each ABORTED - SQ DELETION (00/08)]
[2024-11-07 13:35:39.052643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041c600 is same with the state(6) to be set 00:30:31.250
[2024-11-07 13:35:39.052671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.250
[2024-11-07 13:35:39.052683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.250
[2024-11-07 13:35:39.052696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.250
[2024-11-07 13:35:39.052706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.250
[2024-11-07 13:35:39.052718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.250
[2024-11-07 13:35:39.052729]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.250 [2024-11-07 13:35:39.052740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.250 [2024-11-07 13:35:39.052751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.250 [2024-11-07 13:35:39.052761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041d500 is same with the state(6) to be set 00:30:31.250 [2024-11-07 13:35:39.052777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000419900 (9): Bad file descriptor 00:30:31.250 [2024-11-07 13:35:39.052810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.250 [2024-11-07 13:35:39.052822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.250 [2024-11-07 13:35:39.052835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.250 [2024-11-07 13:35:39.052846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.250 [2024-11-07 13:35:39.052858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.250 [2024-11-07 13:35:39.052880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.250 [2024-11-07 13:35:39.052891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.250 [2024-11-07 13:35:39.052902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.250 [2024-11-07 13:35:39.052912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041a800 is same with the state(6) to be set 00:30:31.250 [2024-11-07 13:35:39.052935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:30:31.250 [2024-11-07 13:35:39.052955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000418a00 (9): Bad file descriptor 00:30:31.250 [2024-11-07 13:35:39.053015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.250 [2024-11-07 13:35:39.053034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.250 [2024-11-07 13:35:39.053054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.250 [2024-11-07 13:35:39.053065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.250 [2024-11-07 13:35:39.053079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 
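[Editor's note, context for the collapsed runs above: when the TCP connection to the target drops mid-I/O, the host-side driver tears down the qpair's submission queue and completes every outstanding command with ABORTED - SQ DELETION (SCT 0x0 / SC 0x08); the application sees a negative return from the completion poll plus per-I/O callbacks carrying an error status. A minimal sketch of that observation point, using SPDK's public API; the qpair variable and the handling shown are illustrative, not code from this test.]

#include <stdio.h>
#include "spdk/nvme.h"

/* Per-I/O completion callback: during an abort storm like the one logged
 * above, each outstanding READ/WRITE lands here with an error status of
 * SCT_GENERIC / ABORTED_SQ_DELETION. */
static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "I/O aborted: sct=0x%x sc=0x%x\n",
			cpl->status.sct, cpl->status.sc);
	}
}

/* Poll loop: a transport failure such as the "CQ transport error -6"
 * record above surfaces as a negative return value (-ENXIO) here. */
static void
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc < 0) {
		fprintf(stderr, "qpair unusable (%d); controller reset required\n", rc);
	}
}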
00:30:31.252 [2024-11-07 13:35:39.054604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000421880 is same with the state(6) to be set
00:30:31.252 [2024-11-07 13:35:39.057575] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 [repeated at 13:35:39.060057, .062205, .062258, .062304 and .062536]
00:30:31.252 task offset: 29696 on job bdev=Nvme2n1 fails
00:30:31.252 1664.00 IOPS, 104.00 MiB/s [2024-11-07T12:35:39.259Z]
00:30:31.252 [2024-11-07 13:35:39.057633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:30:31.252 [2024-11-07 13:35:39.057658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041c600 (9): Bad file descriptor
00:30:31.252 [2024-11-07 13:35:39.059607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:30:31.252 [2024-11-07 13:35:39.059639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:31.252 [2024-11-07 13:35:39.059674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041d500 (9): Bad file descriptor
00:30:31.252 [2024-11-07 13:35:39.059816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420; The recv state of tqpair=0x615000417b00 is same with the state(6) to be set
00:30:31.252 [2024-11-07 13:35:39.061401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x61500041c600 with addr=10.0.0.2, port=4420; The recv state of tqpair=0x61500041c600 is same with the state(6) to be set
00:30:31.252 [2024-11-07 13:35:39.061641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420; The recv state of tqpair=0x615000416c00 is same with the state(6) to be set
00:30:31.252 [2024-11-07 13:35:39.061693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000417b00 (9): Bad file descriptor
00:30:31.252 [2024-11-07 13:35:39.062875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x61500041d500 with addr=10.0.0.2, port=4420; The recv state of tqpair=0x61500041d500 is same with the state(6) to be set
00:30:31.252 [2024-11-07 13:35:39.062929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041c600 (9): Bad file descriptor
00:30:31.252 [2024-11-07 13:35:39.062948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor
00:30:31.252 [2024-11-07 13:35:39.062962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state; nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: controller reinitialization failed; nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: in failed state.; bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:31.252 [2024-11-07 13:35:39.063044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041e400, 0x61500041b700, 0x61500041a800 and 0x61500041d500 (9): Bad file descriptor [4 records, 13:35:39.063044-063333, collapsed]
00:30:31.252 [2024-11-07 13:35:39.063351] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state; controller reinitialization failed; in failed state.; Resetting controller failed.
00:30:31.252 [2024-11-07 13:35:39.063395] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state; controller reinitialization failed; in failed state.
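[Editor's note, the four-step failure sequence logged above (Ctrlr is in error state, controller reinitialization failed, in failed state, Resetting controller failed) is the reset path giving up because every reconnect to 10.0.0.2:4420 is refused (errno 111, ECONNREFUSED) while the target side is down. A hedged sketch of that disconnect/reconnect sequence using SPDK's public API; the loop structure and the ctrlr variable are illustrative, not this test's code.]

#include <errno.h>
#include "spdk/nvme.h"

/* Disconnect the controller, then poll the async reconnect until it
 * settles. While the target's listener is down, the poll keeps failing;
 * the "spdk_nvme_ctrlr_reconnect_poll_async: controller reinitialization
 * failed" records above come from this path inside bdev_nvme's reset
 * handling. */
static int
reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc = spdk_nvme_ctrlr_disconnect(ctrlr);   /* "resetting controller" */

	if (rc != 0) {
		return rc;
	}
	spdk_nvme_ctrlr_reconnect_async(ctrlr);
	do {
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);
	return rc;   /* non-zero: controller is left in failed state */
}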
00:30:31.252 [2024-11-07 13:35:39.063424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:30:31.253 nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000422280 is same with the state(6) to be set [64 command/completion pairs, 13:35:39.063475-065132, collapsed]
00:30:31.254 nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-14 nsid:1 lba:24576-26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each completed ABORTED - SQ DELETION (00/08) qid:1 [15 command/completion pairs, 13:35:39.066633-067034, collapsed; the cid:15 command record at 13:35:39.067049 is cut off here in the source log]
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.254 [2024-11-07 13:35:39.067063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.254 [2024-11-07 13:35:39.067077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.254 [2024-11-07 13:35:39.067088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.254 [2024-11-07 13:35:39.067102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.254 [2024-11-07 13:35:39.067114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.254 [2024-11-07 13:35:39.067128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.254 [2024-11-07 13:35:39.067140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.254 [2024-11-07 13:35:39.067154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.254 [2024-11-07 13:35:39.067165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.254 [2024-11-07 13:35:39.067178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:31.255 [2024-11-07 13:35:39.067824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.067979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.067992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.068003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.068018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.068037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.068051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.068063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.068077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 
13:35:39.068087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.068101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.068111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.068125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.068137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.068151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.068162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.255 [2024-11-07 13:35:39.068176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.255 [2024-11-07 13:35:39.068187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.068202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.068214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.068229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.068240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.068253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.068264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.068279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.068291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.068303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000422780 is same with the state(6) to be set 00:30:31.256 [2024-11-07 13:35:39.069818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.069839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.069870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.069886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.069900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.069914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.069929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.069941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.069956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.069967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.069980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.069992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070134] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.256 [2024-11-07 13:35:39.070732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.256 [2024-11-07 13:35:39.070745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.070757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.070771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.070783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.070797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.070808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.070822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.070833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.070849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.070861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.070879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.070890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.070904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.070915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.070929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.070941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.070954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.070965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.070979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.070990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:31.257 [2024-11-07 13:35:39.071168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 
13:35:39.071425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.257 [2024-11-07 13:35:39.071477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.257 [2024-11-07 13:35:39.071491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000424580 is same with the state(6) to be set 00:30:31.257 [2024-11-07 13:35:39.072962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:30:31.257 [2024-11-07 13:35:39.072983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:30:31.257 [2024-11-07 13:35:39.072997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:30:31.257 [2024-11-07 13:35:39.073057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:30:31.257 [2024-11-07 13:35:39.073070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:30:31.257 [2024-11-07 13:35:39.073082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:30:31.257 [2024-11-07 13:35:39.073092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:30:31.257 [2024-11-07 13:35:39.073592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-11-07 13:35:39.073614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000418a00 with addr=10.0.0.2, port=4420 00:30:31.257 [2024-11-07 13:35:39.073627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000418a00 is same with the state(6) to be set 00:30:31.257 [2024-11-07 13:35:39.073822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-11-07 13:35:39.073838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000419900 with addr=10.0.0.2, port=4420 00:30:31.257 [2024-11-07 13:35:39.073849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000419900 is same with the state(6) to be set 00:30:31.257 [2024-11-07 13:35:39.074202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.257 [2024-11-07 13:35:39.074218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041f300 with addr=10.0.0.2, port=4420 00:30:31.257 [2024-11-07 13:35:39.074229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041f300 is same with the state(6) to be set 00:30:31.258 [2024-11-07 13:35:39.075108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.258 [2024-11-07 13:35:39.075132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.258 [2024-11-07 13:35:39.075150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.258 [2024-11-07 13:35:39.075163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.258 [2024-11-07 13:35:39.075178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.258 [2024-11-07 13:35:39.075190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.258 [2024-11-07 13:35:39.075205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.258 [2024-11-07 13:35:39.075216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.258 [2024-11-07 13:35:39.075230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.258 [2024-11-07 13:35:39.075242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.258 [2024-11-07 13:35:39.075260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.258 [2024-11-07 13:35:39.075272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.258 [2024-11-07 13:35:39.075287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.258 [2024-11-07 13:35:39.075299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 57 further identical READ / ABORTED - SQ DELETION pairs elided: cid:7 through cid:63, lba stepping by 128 from 17280 up to 24448 ...]
00:30:31.259 [2024-11-07 13:35:39.076814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000422c80 is same with the state(6) to be set
00:30:31.259 [2024-11-07 13:35:39.078309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.259 [2024-11-07 13:35:39.078331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical pairs elided: cid:1 through cid:63, lba stepping by 128 from 16512 up to 24448 ...]
00:30:31.261 [2024-11-07 13:35:39.080017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000423180 is same with the state(6) to be set
00:30:31.261 [2024-11-07 13:35:39.081488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.261 [2024-11-07 13:35:39.081509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical pairs elided: cid:1 through cid:63, lba stepping by 128 from 24704 up to 32640 ...]
00:30:31.262 [2024-11-07 13:35:39.083202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000424080 is same with the state(6) to be set
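Every completion in these three bursts carries the same status, "(00/08)": NVMe status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion, and dnr:0, meaning the Do Not Retry bit is clear. Nothing failed on the media; the in-flight verify READs were drained when each TCP qpair's submission queue was torn down, and they remain retryable once the qpair reconnects. A minimal triage sketch, hypothetical and standalone (it assumes only the log format printed above, one entry per line), that pairs each command with its completion and summarizes a burst:

    # Hypothetical helper, not part of the SPDK test suite: feed it the raw
    # console log and it buckets completions by (sct, sc) status and reports
    # the LBA span of the aborted commands.
    import re
    from collections import Counter

    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: "
        r"(?P<op>\w+) sqid:\d+ cid:(?P<cid>\d+) nsid:\d+ lba:(?P<lba>\d+) len:\d+")
    CPL_RE = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: (?P<text>.+?) "
        r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)")

    def summarize(log_text):
        statuses = Counter()   # (sct, sc, text) -> number of completions
        lbas = []              # LBAs of the completed commands, for the range report
        pending = None         # most recent command still waiting for its completion
        for line in log_text.splitlines():
            cmd = CMD_RE.search(line)
            if cmd:
                pending = cmd
                continue
            cpl = CPL_RE.search(line)
            if cpl and pending:
                statuses[(cpl["sct"], cpl["sc"], cpl["text"])] += 1
                lbas.append(int(pending["lba"]))
                pending = None
        for (sct, sc, text), n in statuses.items():
            print(f"sct=0x{sct} sc=0x{sc} ({text}): {n} commands")
        if lbas:
            print(f"affected LBA range: {min(lbas)}..{max(lbas)}")

Run over the raw output of this phase it would report a single bucket, sct=0x00 sc=0x08, covering the LBA spans shown above.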
00:30:31.262 [2024-11-07 13:35:39.087490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:30:31.262 [2024-11-07 13:35:39.087519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:31.262 [2024-11-07 13:35:39.087533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:30:31.262 [2024-11-07 13:35:39.087546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:30:31.262 [2024-11-07 13:35:39.087563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:30:31.263 [2024-11-07 13:35:39.087577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:30:31.263 [2024-11-07 13:35:39.087642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000418a00 (9): Bad file descriptor
00:30:31.263 [2024-11-07 13:35:39.087660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000419900 (9): Bad file descriptor
00:30:31.263 [2024-11-07 13:35:39.087673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041f300 (9): Bad file descriptor
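The "(9)" in these flush errors is a raw errno: on Linux, 9 is EBADF, so the socket behind each tqpair had already been closed by the disconnect before the final flush ran. The reconnect attempts near the end of this log fail with errno 111, ECONNREFUSED, because the target's listener is not accepting connections yet. A quick lookup, assuming Linux errno numbering:

    # Hypothetical one-off check of the raw errno values in this log
    # (9 from the failed flushes above, 111 from the reconnects below).
    import errno, os

    for code in (9, 111):
        print(code, errno.errorcode[code], "-", os.strerror(code))
    # 9 EBADF - Bad file descriptor
    # 111 ECONNREFUSED - Connection refused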
Failed to flush tqpair=0x615000419900 (9): Bad file descriptor
00:30:31.263 [2024-11-07 13:35:39.087673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041f300 (9): Bad file descriptor
00:30:31.263 [2024-11-07 13:35:39.087709] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:30:31.263 [2024-11-07 13:35:39.087730] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:30:31.263 [2024-11-07 13:35:39.087745] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:30:31.263 [2024-11-07 13:35:39.087760] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:30:31.263
00:30:31.263 Latency(us)
00:30:31.263 [2024-11-07T12:35:39.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:31.263 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:31.263 Job: Nvme1n1 ended in about 1.05 seconds with error
00:30:31.263 Verification LBA range: start 0x0 length 0x400
00:30:31.263 Nvme1n1 : 1.05 182.30 11.39 60.77 0.00 260511.79 14308.69 279620.27
00:30:31.263 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:31.263 Job: Nvme2n1 ended in about 1.03 seconds with error
00:30:31.263 Verification LBA range: start 0x0 length 0x400
00:30:31.263 Nvme2n1 : 1.03 186.87 11.68 62.29 0.00 249128.32 25122.13 270882.13
00:30:31.263 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:31.263 Job: Nvme3n1 ended in about 1.06 seconds with error
00:30:31.263 Verification LBA range: start 0x0 length 0x400
00:30:31.263 Nvme3n1 : 1.06 181.02 11.31 60.34 0.00 252486.61 20753.07 283115.52
00:30:31.263 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:31.263 Job: Nvme4n1 ended in about 1.06 seconds with error
00:30:31.263 Verification LBA range: start 0x0 length 0x400
00:30:31.263 Nvme4n1 : 1.06 180.48 11.28 60.16 0.00 248435.63 24466.77 269134.51
00:30:31.263 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:31.263 Job: Nvme5n1 ended in about 1.07 seconds with error
00:30:31.263 Verification LBA range: start 0x0 length 0x400
00:30:31.263 Nvme5n1 : 1.07 119.37 7.46 59.68 0.00 327544.04 17476.27 276125.01
00:30:31.263 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:31.263 Job: Nvme6n1 ended in about 1.08 seconds with error
00:30:31.263 Verification LBA range: start 0x0 length 0x400
00:30:31.263 Nvme6n1 : 1.08 119.01 7.44 59.51 0.00 322145.56 22500.69 302339.41
00:30:31.263 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:31.263 Job: Nvme7n1 ended in about 1.05 seconds with error
00:30:31.263 Verification LBA range: start 0x0 length 0x400
00:30:31.263 Nvme7n1 : 1.05 182.76 11.42 60.92 0.00 230336.43 12888.75 251658.24
00:30:31.263 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:31.263 Job: Nvme8n1 ended in about 1.05 seconds with error
00:30:31.263 Verification LBA range: start 0x0 length 0x400
00:30:31.263 Nvme8n1 : 1.05 182.55 11.41 60.85 0.00 225749.12 10048.85 263891.63
00:30:31.263 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:31.263 Job: Nvme9n1 ended in about 1.08 seconds with error
00:30:31.263 Verification LBA range: start 0x0 length 0x400
00:30:31.263 Nvme9n1 : 1.08 178.00 11.12 59.33 0.00 227572.05 17148.59 248162.99
00:30:31.263 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:31.263 Job: Nvme10n1 ended in about 1.07 seconds with error
00:30:31.263 Verification LBA range: start 0x0 length 0x400
00:30:31.263 Nvme10n1 : 1.07 119.96 7.50 59.98 0.00 293039.22 15947.09 288358.40
00:30:31.263 [2024-11-07T12:35:39.270Z] ===================================================================================================================
00:30:31.263 [2024-11-07T12:35:39.270Z] Total : 1632.34 102.02 603.84 0.00 259596.38 10048.85 302339.41
00:30:31.263 [2024-11-07 13:35:39.160655] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:31.263 [2024-11-07 13:35:39.160715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:30:31.263 [2024-11-07 13:35:39.161076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.263 [2024-11-07 13:35:39.161104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:30:31.263 [2024-11-07 13:35:39.161121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000417b00 is same with the state(6) to be set
00:30:31.263 [2024-11-07 13:35:39.161487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.263 [2024-11-07 13:35:39.161503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420
00:30:31.263 [2024-11-07 13:35:39.161514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set
00:30:31.263 [2024-11-07 13:35:39.161887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.263 [2024-11-07 13:35:39.161903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041c600 with addr=10.0.0.2, port=4420
00:30:31.263 [2024-11-07 13:35:39.161914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041c600 is same with the state(6) to be set
00:30:31.263 [2024-11-07 13:35:39.162123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.263 [2024-11-07 13:35:39.162139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041d500 with addr=10.0.0.2, port=4420
00:30:31.263 [2024-11-07 13:35:39.162151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041d500 is same with the state(6) to be set
00:30:31.263 [2024-11-07 13:35:39.162436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.263 [2024-11-07 13:35:39.162451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041a800 with addr=10.0.0.2, port=4420
00:30:31.263 [2024-11-07 13:35:39.162461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041a800 is same with the state(6) to be set
00:30:31.263 [2024-11-07 13:35:39.162792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.263 [2024-11-07 13:35:39.162808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of
tqpair=0x61500041b700 with addr=10.0.0.2, port=4420 00:30:31.263 [2024-11-07 13:35:39.162819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041b700 is same with the state(6) to be set 00:30:31.263 [2024-11-07 13:35:39.162836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:30:31.263 [2024-11-07 13:35:39.162848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:30:31.263 [2024-11-07 13:35:39.162867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:30:31.263 [2024-11-07 13:35:39.162884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:30:31.263 [2024-11-07 13:35:39.162898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:30:31.263 [2024-11-07 13:35:39.162908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:30:31.263 [2024-11-07 13:35:39.162918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:30:31.263 [2024-11-07 13:35:39.162927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:30:31.263 [2024-11-07 13:35:39.162938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:30:31.263 [2024-11-07 13:35:39.162947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:30:31.263 [2024-11-07 13:35:39.162958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:30:31.263 [2024-11-07 13:35:39.162968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:30:31.263 [2024-11-07 13:35:39.164761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.263 [2024-11-07 13:35:39.164791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041e400 with addr=10.0.0.2, port=4420 00:30:31.263 [2024-11-07 13:35:39.164804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041e400 is same with the state(6) to be set 00:30:31.263 [2024-11-07 13:35:39.164824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000417b00 (9): Bad file descriptor 00:30:31.263 [2024-11-07 13:35:39.164842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:30:31.263 [2024-11-07 13:35:39.164855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041c600 (9): Bad file descriptor 00:30:31.263 [2024-11-07 13:35:39.164875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041d500 (9): Bad file descriptor 00:30:31.263 [2024-11-07 13:35:39.164888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041a800 (9): Bad file descriptor 00:30:31.263 [2024-11-07 13:35:39.164902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041b700 (9): Bad file descriptor 00:30:31.263 [2024-11-07 13:35:39.164963] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:30:31.263 [2024-11-07 13:35:39.164982] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:30:31.263 [2024-11-07 13:35:39.164996] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:30:31.263 [2024-11-07 13:35:39.165011] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:30:31.263 [2024-11-07 13:35:39.165024] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:30:31.263 [2024-11-07 13:35:39.165038] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:30:31.263 [2024-11-07 13:35:39.165381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041e400 (9): Bad file descriptor 00:30:31.263 [2024-11-07 13:35:39.165405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:30:31.263 [2024-11-07 13:35:39.165416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:30:31.263 [2024-11-07 13:35:39.165427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:30:31.264 [2024-11-07 13:35:39.165438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:30:31.264 [2024-11-07 13:35:39.165450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:31.264 [2024-11-07 13:35:39.165459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:31.264 [2024-11-07 13:35:39.165469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:31.264 [2024-11-07 13:35:39.165478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:30:31.264 [2024-11-07 13:35:39.165488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:30:31.264 [2024-11-07 13:35:39.165498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:30:31.264 [2024-11-07 13:35:39.165508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:30:31.264 [2024-11-07 13:35:39.165517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:30:31.264 [2024-11-07 13:35:39.165527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:30:31.264 [2024-11-07 13:35:39.165536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:30:31.264 [2024-11-07 13:35:39.165545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:30:31.264 [2024-11-07 13:35:39.165555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:30:31.264 [2024-11-07 13:35:39.165566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:30:31.264 [2024-11-07 13:35:39.165575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:30:31.264 [2024-11-07 13:35:39.165585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:30:31.264 [2024-11-07 13:35:39.165594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:30:31.264 [2024-11-07 13:35:39.165604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:30:31.264 [2024-11-07 13:35:39.165613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:30:31.264 [2024-11-07 13:35:39.165623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:30:31.264 [2024-11-07 13:35:39.165632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:30:31.264 [2024-11-07 13:35:39.165712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:30:31.264 [2024-11-07 13:35:39.165736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:30:31.264 [2024-11-07 13:35:39.165749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:30:31.264 [2024-11-07 13:35:39.165790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:30:31.264 [2024-11-07 13:35:39.165803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:30:31.264 [2024-11-07 13:35:39.165814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:30:31.264 [2024-11-07 13:35:39.165824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:30:31.264 [2024-11-07 13:35:39.166196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.264 [2024-11-07 13:35:39.166216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041f300 with addr=10.0.0.2, port=4420 00:30:31.264 [2024-11-07 13:35:39.166229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500041f300 is same with the state(6) to be set 00:30:31.264 [2024-11-07 13:35:39.166559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.264 [2024-11-07 13:35:39.166574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000419900 with addr=10.0.0.2, port=4420 00:30:31.264 [2024-11-07 13:35:39.166585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000419900 is same with the state(6) to be set 00:30:31.264 [2024-11-07 13:35:39.166922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.264 [2024-11-07 13:35:39.166938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000418a00 with addr=10.0.0.2, port=4420 00:30:31.264 [2024-11-07 13:35:39.166949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000418a00 is same with the state(6) to be set 00:30:31.264 [2024-11-07 13:35:39.166990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041f300 (9): Bad file descriptor 00:30:31.264 [2024-11-07 13:35:39.167007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000419900 (9): Bad file descriptor 00:30:31.264 [2024-11-07 13:35:39.167020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000418a00 (9): Bad file descriptor 00:30:31.264 [2024-11-07 13:35:39.167059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:30:31.264 [2024-11-07 13:35:39.167071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:30:31.264 [2024-11-07 13:35:39.167082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:30:31.264 [2024-11-07 13:35:39.167092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:30:31.264 [2024-11-07 13:35:39.167103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:30:31.264 [2024-11-07 13:35:39.167112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:30:31.264 [2024-11-07 13:35:39.167121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:30:31.264 [2024-11-07 13:35:39.167130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:30:31.264 [2024-11-07 13:35:39.167142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:30:31.264 [2024-11-07 13:35:39.167150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:30:31.264 [2024-11-07 13:35:39.167160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:30:31.264 [2024-11-07 13:35:39.167169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:30:32.655 13:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3996805 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3996805 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3996805 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:30:33.594 13:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.594 rmmod nvme_tcp 00:30:33.594 rmmod nvme_fabrics 00:30:33.594 rmmod nvme_keyring 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3996459 ']' 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3996459 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3996459 ']' 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3996459 00:30:33.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3996459) - No such process 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3996459 is not found' 00:30:33.594 Process with pid 3996459 is not found 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 
-- # iptables-restore 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.594 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.135 00:30:36.135 real 0m9.729s 00:30:36.135 user 0m26.514s 00:30:36.135 sys 0m1.585s 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:36.135 ************************************ 00:30:36.135 END TEST nvmf_shutdown_tc3 00:30:36.135 ************************************ 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:36.135 ************************************ 00:30:36.135 START TEST nvmf_shutdown_tc4 00:30:36.135 ************************************ 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:36.135 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:36.135 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:36.135 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:36.136 Found net devices under 0000:31:00.0: cvl_0_0 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:36.136 Found net devices under 0000:31:00.1: cvl_0_1 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:36.136 13:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:36.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:30:36.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:30:36.136 00:30:36.136 --- 10.0.0.2 ping statistics --- 00:30:36.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.136 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:36.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:30:36.136 00:30:36.136 --- 10.0.0.1 ping statistics --- 00:30:36.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.136 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:36.136 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3998521 00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3998521 00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3998521 ']' 00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
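The nvmf_tcp_init steps above pair the two e810 ports on this host: the target port (cvl_0_0) is moved into a private network namespace while the initiator port (cvl_0_1) stays in the default namespace, and the two pings confirm the path in both directions before the target comes up. Condensed into one place, the commands this run executed (a recap of the log above, not the framework function itself) were:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator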
00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:36.136 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:36.136 [2024-11-07 13:35:44.114477] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:30:36.136 [2024-11-07 13:35:44.114606] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.398 [2024-11-07 13:35:44.282580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:36.398 [2024-11-07 13:35:44.361431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.398 [2024-11-07 13:35:44.361467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.398 [2024-11-07 13:35:44.361475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.398 [2024-11-07 13:35:44.361484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.398 [2024-11-07 13:35:44.361490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:36.398 [2024-11-07 13:35:44.363330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:36.398 [2024-11-07 13:35:44.363470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:36.398 [2024-11-07 13:35:44.363567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.398 [2024-11-07 13:35:44.363593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:36.971 [2024-11-07 13:35:44.910984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
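With nvmf_tgt up (reactors on cores 1-4, mask 0x1E) and the TCP transport initialized, shutdown.sh next creates ten subsystems; the repeated `cat` calls below collect the per-subsystem JSON-RPC commands into rpcs.txt. Issued one at a time instead, an equivalent configuration would look roughly like the sketch below; the Malloc bdev geometry (64 MiB, 512-byte blocks) and the SPDK$i serial numbers are illustrative placeholders, not values read from this run:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in {1..10}; do
    scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512    # placeholder size
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done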
00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:36.971 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:37.232 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:37.232 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:37.232 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:37.232 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:37.232 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:37.232 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:37.232 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # 
rpc_cmd 00:30:37.232 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.232 13:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:37.232 Malloc1 00:30:37.232 [2024-11-07 13:35:45.055592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.232 Malloc2 00:30:37.232 Malloc3 00:30:37.232 Malloc4 00:30:37.491 Malloc5 00:30:37.491 Malloc6 00:30:37.491 Malloc7 00:30:37.751 Malloc8 00:30:37.751 Malloc9 00:30:37.751 Malloc10 00:30:37.751 13:35:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.751 13:35:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:37.751 13:35:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:37.751 13:35:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:37.751 13:35:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3998894 00:30:37.751 13:35:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:30:37.751 13:35:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:30:38.011 [2024-11-07 13:35:45.816220] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
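The shutdown-under-load sequence that follows is the heart of tc4: spdk_nvme_perf is started in the background against the target, given five seconds to reach a steady state, and then the target is killed out from under it. Stripped to a skeleton (variable names follow the script; error handling omitted):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
perfpid=$!
sleep 5
killprocess "$nvmfpid"    # plain kill (SIGTERM) of nvmf_tgt mid-workload

Each in-flight write then fails with sct=0, sc=8 (ABORTED - SQ DELETION, matching the status decode earlier in this log) as the dying target deletes its submission queues, so the "Write completed with error" storm below is the expected outcome of the test, not an incidental failure.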
00:30:43.289 13:35:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:43.289 13:35:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3998521 00:30:43.289 13:35:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3998521 ']' 00:30:43.289 13:35:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3998521 00:30:43.289 13:35:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:30:43.289 13:35:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:43.289 13:35:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3998521 00:30:43.289 13:35:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:43.289 13:35:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:43.289 13:35:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3998521' 00:30:43.289 killing process with pid 3998521 00:30:43.289 13:35:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3998521 00:30:43.289 13:35:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3998521 00:30:43.289 Write completed with error (sct=0, sc=8) 00:30:43.289 Write completed with error (sct=0, sc=8) 00:30:43.289 starting I/O failed: -6 00:30:43.289 Write completed with error (sct=0, sc=8) 00:30:43.289 Write completed with error (sct=0, sc=8) 00:30:43.289 Write completed with error (sct=0, sc=8) 00:30:43.289 Write completed with error (sct=0, sc=8) 00:30:43.289 starting I/O failed: -6 00:30:43.289 [2024-11-07 13:35:50.789758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same Write completed with error (sct=0, sc=8) 00:30:43.289 with the state(6) to be set 00:30:43.289 [2024-11-07 13:35:50.789809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:43.289 Write completed with error (sct=0, sc=8) 00:30:43.289 [2024-11-07 13:35:50.789818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:43.290 [2024-11-07 13:35:50.789825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:43.290 [2024-11-07 13:35:50.789832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:43.290 Write completed with error (sct=0, sc=8) 00:30:43.290 [2024-11-07 13:35:50.789838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:43.290 [2024-11-07 13:35:50.789845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:43.290 
00:30:43.289 Write completed with error (sct=0, sc=8)  [repeated for each outstanding write, interleaved with "starting I/O failed: -6"]
00:30:43.289 [2024-11-07 13:35:50.789758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set  [message repeated 9 times, 13:35:50.789758 through .789858]
00:30:43.290 [2024-11-07 13:35:50.790734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:43.290 Write completed with error (sct=0, sc=8)  [storm continues, interleaved with "starting I/O failed: -6"]
00:30:43.290 [2024-11-07 13:35:50.792043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set  [message repeated 6 times, 13:35:50.792043 through .792102]
00:30:43.290 [2024-11-07 13:35:50.792406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:43.290 [2024-11-07 13:35:50.792632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set  [message repeated 3 times, 13:35:50.792632 through .792657]
00:30:43.290 Write completed with error (sct=0, sc=8)  [storm continues, interleaved with "starting I/O failed: -6"]
00:30:43.290 [2024-11-07 13:35:50.793169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set  [message repeated 5 times, 13:35:50.793169 through .793238]
00:30:43.290 [2024-11-07 13:35:50.794030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(6) to be set  [message repeated 7 times, 13:35:50.794030 through .794096]
00:30:43.291 [2024-11-07 13:35:50.794565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:43.291 starting I/O failed: -6  [long run of failed submissions and aborted writes]
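Decoding the repeated status fields: in an NVMe completion, sct=0 selects the Generic Command Status type, and within it sc=0x8 is "Command Aborted due to SQ Deletion" per the NVMe base specification, which is exactly what you expect while qpairs are being torn down; the -6 is -ENXIO, "No such device or address", matching the CQ transport error text. A small illustrative decoder covering only the codes that appear in this log:

    #!/usr/bin/env bash
    # Illustrative decoder for the status fields seen in this log.
    # It only knows the handful of codes that actually appear here.

    decode_nvme_status() {
        local sct=$1 sc=$2
        case "$sct/$sc" in
            0/0) echo "Generic / Successful Completion" ;;
            0/8) echo "Generic / Command Aborted due to SQ Deletion" ;;
            *)   echo "sct=$sct sc=$sc (not in this sketch's table)" ;;
        esac
    }

    decode_errno() {
        case "${1#-}" in
            6) echo "ENXIO: No such device or address" ;;
            *) echo "errno $1 (not in this sketch's table)" ;;
        esac
    }

    decode_nvme_status 0 8   # -> Generic / Command Aborted due to SQ Deletion
    decode_errno -6          # -> ENXIO: No such device or address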
00:30:43.291 Write completed with error (sct=0, sc=8)  [storm continues, alternating with "starting I/O failed: -6"]
00:30:43.291 [2024-11-07 13:35:50.802398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.291 NVMe io qpair process completion error
00:30:43.291 Write completed with error (sct=0, sc=8)  [storm continues]
00:30:43.291 [2024-11-07 13:35:50.803905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:43.292 Write completed with error (sct=0, sc=8)  [storm continues, interleaved with "starting I/O failed: -6"]
00:30:43.292 [2024-11-07 13:35:50.805286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:43.292 Write completed with error (sct=0, sc=8)  [storm continues]
00:30:43.292 [2024-11-07 13:35:50.807220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.292 Write completed with error (sct=0, sc=8)  [storm continues, interleaved with "starting I/O failed: -6"]
00:30:43.293 [2024-11-07 13:35:50.817024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:43.293 NVMe io qpair process completion error
00:30:43.293 Write completed with error (sct=0, sc=8)  [storm continues]
00:30:43.293 [2024-11-07 13:35:50.818589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:43.293 Write completed with error (sct=0, sc=8)  [storm continues, interleaved with "starting I/O failed: -6"]
00:30:43.293 [2024-11-07 13:35:50.820273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.293 Write completed with error (sct=0, sc=8)  [storm continues]
00:30:43.294 [2024-11-07 13:35:50.822122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:43.294 starting I/O failed: -6  [long run of failed submissions and aborted writes]
00:30:43.294 [2024-11-07 13:35:50.831652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:43.294 NVMe io qpair process completion error
00:30:43.294 Write completed with error (sct=0, sc=8)  [storm continues]
00:30:43.295 [2024-11-07 13:35:50.833650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.295 Write completed with error (sct=0, sc=8)  [storm continues]
00:30:43.295 [2024-11-07 13:35:50.835067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:43.295 Write completed with error (sct=0, sc=8)  [storm continues, interleaved with "starting I/O failed: -6"]
00:30:43.295 [2024-11-07 13:35:50.837032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:43.295 starting I/O failed: -6  [long run of failed submissions and aborted writes]
00:30:43.296 Write completed with error
(sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 [2024-11-07 13:35:50.847456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.296 NVMe io qpair process completion error 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 
00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 [2024-11-07 13:35:50.849071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.296 starting I/O failed: -6 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 
starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 [2024-11-07 13:35:50.850506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.296 Write completed with error (sct=0, sc=8) 00:30:43.296 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error 
(sct=0, sc=8) 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 [2024-11-07 13:35:50.852494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O 
failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 Write completed with error (sct=0, sc=8) 00:30:43.297 starting I/O failed: -6 00:30:43.297 [2024-11-07 13:35:50.859819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.297 NVMe io qpair process completion error 00:30:43.297 
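The bracketed nvme_qpair.c records above come from the SPDK host driver's completion-reaping path: once the TCP connection behind a queue pair dies, spdk_nvme_qpair_process_completions() stops returning a completion count and instead returns a negated errno, here -6, i.e. -ENXIO ("No such device or address"), which the driver logs as a CQ transport error. A minimal sketch of what a polling caller sees; poll_one() and its message are illustrative assumptions, not the test's actual source:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Poll one I/O qpair. A negative return means the qpair's transport
     * has failed; every outstanding and future command on it completes
     * in error until the qpair is reconnected or destroyed. */
    static void
    poll_one(struct spdk_nvme_qpair *qpair)
    {
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

            if (rc < 0) {
                    fprintf(stderr, "CQ transport error %d on qpair\n", rc);
            }
    }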
Write completed with error (sct=0, sc=8)
00:30:43.297 [repeated I/O failure records collapsed]
00:30:43.297 [2024-11-07 13:35:50.861314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:43.297 [repeated I/O failure records collapsed]
00:30:43.298 [2024-11-07 13:35:50.862714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:43.298 [repeated I/O failure records collapsed]
00:30:43.298 [2024-11-07 13:35:50.864627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:43.298 [repeated I/O failure records collapsed]
00:30:43.299 [2024-11-07 13:35:50.874275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:43.299 NVMe io qpair process completion error
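The per-command failures all carry NVMe status sct=0, sc=8: status code type 0 is the generic set, and generic status 0x08 is "Command Aborted due to SQ Deletion", the expected status for writes still in flight while the target tears its queues down. A hedged sketch of a completion callback that would print lines of this shape; write_done() is hypothetical, while the status fields and the spdk_nvme_cpl_is_error() helper are SPDK's:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Completion callback: cpl->status.sct / cpl->status.sc are the fields
     * behind "(sct=0, sc=8)". sct 0 = generic status; sc 0x08 =
     * SPDK_NVME_SC_ABORTED_SQ_DELETION (command aborted because its
     * submission queue was deleted during subsystem shutdown). */
    static void
    write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl)) {
                    printf("Write completed with error (sct=%d, sc=%d)\n",
                           cpl->status.sct, cpl->status.sc);
            }
    }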
error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 [2024-11-07 13:35:50.875883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with 
error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 [2024-11-07 13:35:50.877274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 Write completed with error (sct=0, sc=8) 00:30:43.299 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 
00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 [2024-11-07 13:35:50.879188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write 
completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 [2024-11-07 13:35:50.888928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.300 NVMe io qpair process completion error 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 starting I/O failed: -6 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 Write completed with error (sct=0, sc=8) 00:30:43.300 Write completed with error (sct=0, sc=8) 
00:30:43.300 [2024-11-07 13:35:50.890343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:43.301 [repeated I/O failure records collapsed]
00:30:43.301 [2024-11-07 13:35:50.891961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:43.301 [repeated I/O failure records collapsed]
00:30:43.301 [2024-11-07 13:35:50.893857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:43.302 [repeated I/O failure records collapsed: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 [2024-11-07 13:35:50.905969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.302 NVMe io qpair process completion error 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 
00:30:43.302 [2024-11-07 13:35:50.907614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 [2024-11-07 13:35:50.909206] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.302 starting I/O failed: -6 00:30:43.302 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, 
sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 [2024-11-07 13:35:50.911047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 
Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 [2024-11-07 13:35:50.918318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.303 NVMe io qpair process completion error 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 
00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 [2024-11-07 13:35:50.919888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 starting I/O failed: -6 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.303 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 
00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 [2024-11-07 13:35:50.921274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O 
failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 [2024-11-07 13:35:50.923186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 
Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.304 Write completed with error (sct=0, sc=8) 00:30:43.304 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write 
completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 Write completed with error (sct=0, sc=8) 00:30:43.305 starting I/O failed: -6 00:30:43.305 [2024-11-07 13:35:50.932752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:43.305 NVMe io qpair process completion error 00:30:43.305 Initializing NVMe Controllers 00:30:43.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:30:43.305 Controller IO queue size 128, less than required. 00:30:43.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:30:43.305 Controller IO queue size 128, less than required. 00:30:43.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:30:43.305 Controller IO queue size 128, less than required. 00:30:43.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:30:43.305 Controller IO queue size 128, less than required. 00:30:43.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:30:43.305 Controller IO queue size 128, less than required. 00:30:43.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:30:43.305 Controller IO queue size 128, less than required. 00:30:43.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.305 Controller IO queue size 128, less than required. 00:30:43.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:30:43.305 Controller IO queue size 128, less than required. 00:30:43.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:30:43.305 Controller IO queue size 128, less than required. 00:30:43.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:30:43.305 Controller IO queue size 128, less than required. 
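The queue-size advisory above means spdk_nvme_perf asked for a deeper submission queue than the 128-entry IO queue each fabrics controller negotiated, so the surplus requests sit queued inside the host NVMe driver until completions free a slot. A minimal sketch of acting on that advice, assuming the stock spdk_nvme_perf option spellings (-q queue depth, -o IO size in bytes, -w workload, -t seconds, -r transport ID; verify against your build):

    # Hypothetical re-run sized to the negotiated queue: depth 64 stays under
    # the controller's 128-entry IO queue, so nothing queues in the driver.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode4'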
00:30:43.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:30:43.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:30:43.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:30:43.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:30:43.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:30:43.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:30:43.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:43.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:30:43.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:30:43.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:30:43.305 Initialization complete. Launching workers.
00:30:43.305 ========================================================
00:30:43.305                                                                               Latency(us)
00:30:43.305 Device Information                                                       :     IOPS    MiB/s   Average       min       max
00:30:43.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0 :  1693.81    72.78  75590.03   1649.06 216515.71
00:30:43.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0 :  1652.59    71.01  77565.85   1208.80 225102.82
00:30:43.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1679.57    72.17  74132.87   1588.69 134618.10
00:30:43.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0 :  1662.57    71.44  74986.32   1164.14 157494.96
00:30:43.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0 :  1590.76    68.35  78514.11   1722.10 154624.12
00:30:43.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0 :  1679.78    72.18  74480.86   1221.75 130714.05
00:30:43.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0 :  1649.19    70.86  76005.47   1276.22 171222.67
00:30:43.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0 :  1670.01    71.76  75147.08   1378.39 150859.90
00:30:43.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0 :  1668.10    71.68  75364.22   1223.64 190978.06
00:30:43.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0 :  1658.54    71.27  75927.10   1134.30 173719.69
00:30:43.305 ========================================================
00:30:43.305 Total                                                                    : 16604.93   713.49  75754.45   1134.30 225102.82
00:30:43.305
00:30:43.305 [2024-11-07 13:35:50.955493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027900 is same with the state(6) to be set
00:30:43.305 [2024-11-07 13:35:50.955559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026a00 is same with the state(6) to be set
00:30:43.305 [2024-11-07 13:35:50.955601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026280 is same with the state(6) to be set
00:30:43.305 [2024-11-07 13:35:50.955646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000028080 is same with the state(6) to be set
00:30:43.305 [2024-11-07 13:35:50.955688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000028f80 is same with the state(6) to be set
00:30:43.305 [2024-11-07 13:35:50.955728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000028800 is same with the state(6) to be set
00:30:43.305 [2024-11-07 13:35:50.955769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025b00 is same with the state(6) to be set
00:30:43.305 [2024-11-07 13:35:50.955809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000029700 is same with the state(6) to be set
00:30:43.305 [2024-11-07 13:35:50.955849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027180 is same with the state(6) to be set
00:30:43.305 [2024-11-07 13:35:50.955907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000029e80 is same with the state(6) to be set
00:30:43.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:44.685 13:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:30:45.253 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3998894
00:30:45.253 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:30:45.253 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3998894
00:30:45.253 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3998894
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
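The NOT wait 3998894 sequence above is the harness asserting that the perf process failed as expected: valid_exec_arg confirms wait is a shell builtin, the wait itself returns non-zero (es=1) because spdk_nvme_perf exited with errors, and NOT inverts that failure into a passing check. A simplified sketch of the inversion idiom (the real helper in autotest_common.sh additionally validates the command and treats exit codes above 128, i.e. signals, as real failures):

    # Simplified sketch of the NOT helper traced above: succeed only when
    # the wrapped command fails, as with an expected-to-fail wait.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # invert: non-zero exit from "$@" becomes success
    }

    NOT wait 3998894   # passes here because the perf job exited non-zero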
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:45.512 rmmod nvme_tcp
00:30:45.512 rmmod nvme_fabrics
00:30:45.512 rmmod nvme_keyring
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3998521 ']'
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3998521
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3998521 ']'
00:30:45.512 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3998521
00:30:45.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3998521) - No such process
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3998521 is not found'
00:30:45.513 Process with pid 3998521 is not found
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
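The killprocess call above probes its target with kill -0 before signalling; the nvmf target (pid 3998521) has already exited, so the probe fails with "No such process" and the helper simply logs that the pid is not found instead of tripping the test's set -e. A simplified sketch of that idempotent-cleanup shape (the real helper in autotest_common.sh also sends SIGTERM and waits on live processes):

    # Simplified sketch: probe first so cleaning up an already-dead process
    # is a logged no-op rather than a fatal error under set -e.
    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
            wait "$pid" || true
        else
            echo "Process with pid $pid is not found"
        fi
    }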
00:30:45.513 13:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:47.418 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:47.418
00:30:47.418 real 0m11.743s
00:30:47.418 user 0m33.042s
00:30:47.418 sys 0m3.909s
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:30:47.678 ************************************
00:30:47.678 END TEST nvmf_shutdown_tc4
00:30:47.678 ************************************
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:30:47.678
00:30:47.678 real 0m52.770s
00:30:47.678 user 2m18.599s
00:30:47.678 sys 0m15.189s
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:47.678 ************************************
00:30:47.678 END TEST nvmf_shutdown
00:30:47.678 ************************************
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:30:47.678 ************************************
00:30:47.678 START TEST nvmf_nsid
00:30:47.678 ************************************
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
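run_test is the wrapper that produces the asterisk banners and the per-test real/user/sys timing blocks above: it validates its argument count (the '[' 3 -le 1 ']' check), prints START TEST, times the test body, and prints END TEST on the way out. A simplified sketch of that shape (the real helper in autotest_common.sh also manages xtrace state around each test):

    # Simplified sketch of the run_test wrapper seen in the trace: banner,
    # timed test body, banner again.
    run_test() {
        local name=$1; shift
        (( $# >= 1 )) || return 1   # mirrors the '[' 3 -le 1 ']' argument check
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }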
00:30:47.678 * Looking for test storage...
00:30:47.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version
00:30:47.678 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:47.939 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:30:47.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:47.940 --rc genhtml_branch_coverage=1
00:30:47.940 --rc genhtml_function_coverage=1
00:30:47.940 --rc genhtml_legend=1
00:30:47.940 --rc geninfo_all_blocks=1
00:30:47.940 --rc geninfo_unexecuted_blocks=1
00:30:47.940
00:30:47.940 '
[the same multi-line flag block is echoed again for the LCOV_OPTS assignment at common/autotest_common.sh@1704 and for the export and assignment of LCOV='lcov ...' at common/autotest_common.sh@1705]
== FreeBSD ]] 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:47.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:47.940 13:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:56.181 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:56.181 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
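The device discovery above walks a PCI bus cache keyed by vendor:device, sorts matches into the e810/x722/mlx family arrays, and echoes each hit. A self-contained sketch of the same idea that reads the IDs straight from sysfs (SPDK's internal pci_bus_cache is assumed away here; only the Intel E810 IDs visible in this log are matched):

    # Collect PCI functions whose vendor:device pair marks an Intel E810 NIC
    # (0x8086 with 0x1592 or 0x159b), mirroring the "Found 0000:31:00.x" lines.
    e810=()
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")    # e.g. 0x8086
        device=$(<"$dev/device")    # e.g. 0x159b
        case "$vendor:$device" in
            0x8086:0x1592|0x8086:0x159b)
                e810+=("${dev##*/}")
                echo "Found ${dev##*/} ($vendor - $device)"
                ;;
        esac
    done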
00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:56.181 Found net devices under 0000:31:00.0: cvl_0_0 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:56.181 Found net devices under 0000:31:00.1: cvl_0_1 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.181 13:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:56.181 13:36:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.181 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:56.181 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.181 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:56.181 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.182 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.182 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.182 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:56.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:30:56.441 00:30:56.441 --- 10.0.0.2 ping statistics --- 00:30:56.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.441 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:56.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:30:56.441 00:30:56.441 --- 10.0.0.1 ping statistics --- 00:30:56.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.441 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=4004996 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 4004996 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 4004996 ']' 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:56.441 13:36:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:56.441 [2024-11-07 13:36:04.358116] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
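Condensed from the nvmf_tcp_init steps traced above: the target port moves into a private network namespace, each side gets a 10.0.0.0/24 address, the ACCEPT rule is tagged with an SPDK_NVMF comment (so teardown can later drop every tagged rule via iptables-save | grep -v SPDK_NVMF | iptables-restore, as happens near the end of this run), and reachability is pinged in both directions:

    # Target NIC in its own namespace; initiator NIC stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Comment-tag the rule so cleanup can strip all SPDK_NVMF rules at once.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator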
00:30:56.441 [2024-11-07 13:36:04.358250] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.700 [2024-11-07 13:36:04.523648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.700 [2024-11-07 13:36:04.621440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.700 [2024-11-07 13:36:04.621484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.700 [2024-11-07 13:36:04.621497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.700 [2024-11-07 13:36:04.621509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.700 [2024-11-07 13:36:04.621519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.700 [2024-11-07 13:36:04.622753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=4005340 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=dc12c076-da1e-4fcd-a076-f8814062ba64 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=441db147-04c2-4139-b6b0-c2ae4cc26259 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=90d7bae7-ea04-430d-b95c-8b971552f8ed 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.267 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:57.267 null0 00:30:57.267 null1 00:30:57.267 null2 00:30:57.267 [2024-11-07 13:36:05.217351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.267 [2024-11-07 13:36:05.241634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.267 [2024-11-07 13:36:05.247280] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:30:57.267 [2024-11-07 13:36:05.247380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4005340 ] 00:30:57.527 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.527 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 4005340 /var/tmp/tgt2.sock 00:30:57.527 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 4005340 ']' 00:30:57.527 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:57.527 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:57.527 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:57.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
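The rpc_cmd call above is fed a payload the trace folds away; judging by the three generated UUIDs, the null0/null1/null2 bdevs echoed back, and the cnode2 listener that appears next on 10.0.0.1:4421, it plausibly expands to standard SPDK RPCs along these lines (the bdev sizes and the single-subsystem layout are assumptions, not shown in the log):

    rpc() { scripts/rpc.py -s /var/tmp/tgt2.sock "$@"; }

    rpc nvmf_create_transport -t tcp
    rpc bdev_null_create null0 100 4096    # 100 MiB, 4K blocks -- sizes assumed
    rpc bdev_null_create null1 100 4096
    rpc bdev_null_create null2 100 4096
    rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -u dc12c076-da1e-4fcd-a076-f8814062ba64
    rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null1 -u 441db147-04c2-4139-b6b0-c2ae4cc26259
    rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 -u 90d7bae7-ea04-430d-b95c-8b971552f8ed
    rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421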
00:30:57.527 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:57.527 13:36:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:57.527 [2024-11-07 13:36:05.400152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.527 [2024-11-07 13:36:05.498892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.465 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:58.465 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:30:58.465 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:58.466 [2024-11-07 13:36:06.419527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.466 [2024-11-07 13:36:06.435708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:58.466 nvme0n1 nvme0n2 00:30:58.466 nvme1n1 00:30:58.724 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:58.724 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:58.724 13:36:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:31:00.100 13:36:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:31:01.040 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:31:01.040 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:31:01.040 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:31:01.040 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:31:01.040 13:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:31:01.040 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid dc12c076-da1e-4fcd-a076-f8814062ba64 00:31:01.040 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:31:01.040 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:31:01.040 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:31:01.040 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:31:01.040 13:36:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:31:01.040 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dc12c076da1e4fcda076f8814062ba64 00:31:01.040 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DC12C076DA1E4FCDA076F8814062BA64 00:31:01.040 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ DC12C076DA1E4FCDA076F8814062BA64 == \D\C\1\2\C\0\7\6\D\A\1\E\4\F\C\D\A\0\7\6\F\8\8\1\4\0\6\2\B\A\6\4 ]] 00:31:01.040 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:31:01.040 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:31:01.040 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:31:01.040 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:31:01.040 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:31:01.040 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 441db147-04c2-4139-b6b0-c2ae4cc26259 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=441db14704c24139b6b0c2ae4cc26259 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 441DB14704C24139B6B0C2AE4CC26259 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 441DB14704C24139B6B0C2AE4CC26259 == \4\4\1\D\B\1\4\7\0\4\C\2\4\1\3\9\B\6\B\0\C\2\A\E\4\C\C\2\6\2\5\9 ]] 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:31:01.300 13:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 90d7bae7-ea04-430d-b95c-8b971552f8ed 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=90d7bae7ea04430db95c8b971552f8ed 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 90D7BAE7EA04430DB95C8B971552F8ED 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 90D7BAE7EA04430DB95C8B971552F8ED == \9\0\D\7\B\A\E\7\E\A\0\4\4\3\0\D\B\9\5\C\8\B\9\7\1\5\5\2\F\8\E\D ]] 00:31:01.300 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:31:01.559 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:31:01.559 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:31:01.559 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 4005340 00:31:01.559 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 4005340 ']' 00:31:01.559 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 4005340 00:31:01.559 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:31:01.559 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:01.559 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4005340 00:31:01.818 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:01.818 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:01.818 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4005340' 00:31:01.818 killing process with pid 4005340 00:31:01.818 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 4005340 00:31:01.818 13:36:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 4005340 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:02.755 rmmod nvme_tcp 00:31:02.755 rmmod nvme_fabrics 00:31:02.755 rmmod nvme_keyring 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 4004996 ']' 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 4004996 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 4004996 ']' 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 4004996 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:31:02.755 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:03.014 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4004996 00:31:03.014 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:03.014 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:03.014 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4004996' 00:31:03.014 killing process with pid 4004996 00:31:03.014 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 4004996 00:31:03.014 13:36:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 4004996 00:31:03.583 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:03.583 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:03.583 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:03.583 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:31:03.584 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:31:03.584 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:03.584 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:31:03.843 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:03.843 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:03.843 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.843 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.843 13:36:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.746 13:36:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:05.746 00:31:05.746 real 0m18.144s 00:31:05.746 user 
0m15.421s 00:31:05.746 sys 0m7.681s 00:31:05.746 13:36:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:05.746 13:36:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:31:05.746 ************************************ 00:31:05.746 END TEST nvmf_nsid 00:31:05.747 ************************************ 00:31:05.747 13:36:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:05.747 00:31:05.747 real 19m45.750s 00:31:05.747 user 50m15.203s 00:31:05.747 sys 4m51.184s 00:31:05.747 13:36:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:05.747 13:36:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:05.747 ************************************ 00:31:05.747 END TEST nvmf_target_extra 00:31:05.747 ************************************ 00:31:06.007 13:36:13 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:06.007 13:36:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:06.007 13:36:13 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:06.007 13:36:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:06.007 ************************************ 00:31:06.007 START TEST nvmf_host 00:31:06.007 ************************************ 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:06.007 * Looking for test storage... 00:31:06.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:06.007 13:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:06.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.007 --rc genhtml_branch_coverage=1 00:31:06.007 --rc genhtml_function_coverage=1 00:31:06.007 --rc genhtml_legend=1 00:31:06.007 --rc geninfo_all_blocks=1 00:31:06.008 --rc geninfo_unexecuted_blocks=1 00:31:06.008 00:31:06.008 ' 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:06.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.008 --rc genhtml_branch_coverage=1 00:31:06.008 --rc genhtml_function_coverage=1 00:31:06.008 --rc genhtml_legend=1 00:31:06.008 --rc geninfo_all_blocks=1 00:31:06.008 --rc geninfo_unexecuted_blocks=1 00:31:06.008 00:31:06.008 ' 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:06.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.008 --rc genhtml_branch_coverage=1 00:31:06.008 --rc genhtml_function_coverage=1 00:31:06.008 --rc genhtml_legend=1 00:31:06.008 --rc geninfo_all_blocks=1 00:31:06.008 --rc geninfo_unexecuted_blocks=1 00:31:06.008 00:31:06.008 ' 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:06.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.008 --rc genhtml_branch_coverage=1 00:31:06.008 --rc genhtml_function_coverage=1 00:31:06.008 --rc genhtml_legend=1 00:31:06.008 --rc geninfo_all_blocks=1 00:31:06.008 --rc geninfo_unexecuted_blocks=1 00:31:06.008 00:31:06.008 ' 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
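The lt 1.15 2 trace above is scripts/common.sh splitting both version strings on the .-: separators and comparing them component by component, with missing components comparing as 0. Condensed into a standalone function:

    # Return success when version $1 sorts strictly before version $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace's return 0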
00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:06.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:06.008 13:36:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:06.008 13:36:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:06.008 13:36:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:31:06.008 13:36:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:31:06.008 13:36:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:06.008 13:36:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:06.008 13:36:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:06.008 13:36:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.008 ************************************ 00:31:06.008 START TEST nvmf_multicontroller 00:31:06.008 ************************************ 00:31:06.008 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:06.269 * Looking for test storage... 
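
Note the captured shell error just above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and test's -eq demands integers on both sides, so an unset toggle expanding to the empty string yields "[: : integer expression expected". The run tolerates it (the test simply evaluates false), but the failure mode and the usual guard look like this; FEATURE_FLAG is a placeholder name, not the variable common.sh actually checks:

    unset FEATURE_FLAG                    # stands in for the unset toggle
    [ "$FEATURE_FLAG" -eq 1 ]             # prints: [: : integer expression expected
    [ "${FEATURE_FLAG:-0}" -eq 1 ] || echo "flag off"   # defaulting avoids the error
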
00:31:06.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:06.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.269 --rc genhtml_branch_coverage=1 00:31:06.269 --rc genhtml_function_coverage=1 00:31:06.269 --rc genhtml_legend=1 00:31:06.269 --rc geninfo_all_blocks=1 00:31:06.269 --rc geninfo_unexecuted_blocks=1 00:31:06.269 00:31:06.269 ' 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:06.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.269 --rc genhtml_branch_coverage=1 00:31:06.269 --rc genhtml_function_coverage=1 00:31:06.269 --rc genhtml_legend=1 00:31:06.269 --rc geninfo_all_blocks=1 00:31:06.269 --rc geninfo_unexecuted_blocks=1 00:31:06.269 00:31:06.269 ' 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:06.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.269 --rc genhtml_branch_coverage=1 00:31:06.269 --rc genhtml_function_coverage=1 00:31:06.269 --rc genhtml_legend=1 00:31:06.269 --rc geninfo_all_blocks=1 00:31:06.269 --rc geninfo_unexecuted_blocks=1 00:31:06.269 00:31:06.269 ' 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:06.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.269 --rc genhtml_branch_coverage=1 00:31:06.269 --rc genhtml_function_coverage=1 00:31:06.269 --rc genhtml_legend=1 00:31:06.269 --rc geninfo_all_blocks=1 00:31:06.269 --rc geninfo_unexecuted_blocks=1 00:31:06.269 00:31:06.269 ' 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:31:06.269 13:36:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:06.269 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:06.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:06.270 13:36:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:31:06.270 13:36:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:31:14.396 
13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:14.396 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:14.396 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.396 13:36:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:14.396 Found net devices under 0000:31:00.0: cvl_0_0 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.396 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:14.656 Found net devices under 0000:31:00.1: cvl_0_1 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
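
What the discovery loop above does, in outline: gather_supported_nvmf_pci_devs keeps a whitelist of vendor:device IDs (Intel E810 and X722, several Mellanox ConnectX parts), collects the matching PCI functions, and resolves each to its kernel net device through sysfs, which is how 0000:31:00.0 and 0000:31:00.1 become cvl_0_0 and cvl_0_1. A rough standalone equivalent for the E810 IDs seen here (lspci-based; the script itself reads a prebuilt pci_bus_cache):

    for id in 8086:1592 8086:159b; do
        for pci in $(lspci -D -d "$id" | awk '{print $1}'); do
            for net in /sys/bus/pci/devices/"$pci"/net/*; do
                [[ -e $net ]] || continue
                echo "Found net devices under $pci: ${net##*/}"
            done
        done
    done
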
00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:14.656 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:14.657 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.657 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.657 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.657 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:14.657 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:14.657 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:14.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:14.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:31:14.916 00:31:14.916 --- 10.0.0.2 ping statistics --- 00:31:14.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.916 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:14.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:31:14.916 00:31:14.916 --- 10.0.0.1 ping statistics --- 00:31:14.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.916 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=4011709 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 4011709 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 4011709 ']' 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:14.916 13:36:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:14.916 [2024-11-07 13:36:22.878409] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
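
nvmf_tcp_init, traced above, keeps the traffic off loopback: one physical port is moved into a dedicated network namespace as the target side (10.0.0.2) while the other stays in the root namespace as the initiator (10.0.0.1); an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the link before the target starts. Condensed from the traced commands (interface names as in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE   # target lives in the netns
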
00:31:14.916 [2024-11-07 13:36:22.878538] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.176 [2024-11-07 13:36:23.061396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:15.435 [2024-11-07 13:36:23.184755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.435 [2024-11-07 13:36:23.184828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.435 [2024-11-07 13:36:23.184842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.435 [2024-11-07 13:36:23.184855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.435 [2024-11-07 13:36:23.184882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.435 [2024-11-07 13:36:23.188050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.435 [2024-11-07 13:36:23.188180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.435 [2024-11-07 13:36:23.188204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.695 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:15.695 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:31:15.695 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:15.695 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:15.695 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.695 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.695 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.954 [2024-11-07 13:36:23.704570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.954 Malloc0 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.954 [2024-11-07 13:36:23.809826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.954 [2024-11-07 13:36:23.821768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:15.954 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.955 Malloc1 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4011893 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4011893 /var/tmp/bdevperf.sock 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 4011893 ']' 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:15.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
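
With the target listening on /var/tmp/spdk.sock, the trace provisions it entirely over JSON-RPC: a TCP transport, two 64 MB malloc bdevs with 512-byte blocks, two subsystems, and listeners on ports 4420 and 4421 for each, after which bdevperf is launched idle (-z) on its own RPC socket. The same sequence as direct scripts/rpc.py calls (netns wrapper and the absolute workspace paths from the trace omitted):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2; do
        rpc.py bdev_malloc_create 64 512 -b Malloc$((i - 1))
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i - 1))
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
    done
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
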
00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:15.955 13:36:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:16.893 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:16.893 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:31:16.893 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:16.893 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.893 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:16.893 NVMe0n1 00:31:16.893 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.893 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:16.893 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:31:16.893 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.893 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.152 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.153 1 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.153 request: 00:31:17.153 { 00:31:17.153 "name": "NVMe0", 00:31:17.153 "trtype": "tcp", 00:31:17.153 "traddr": "10.0.0.2", 00:31:17.153 "adrfam": "ipv4", 00:31:17.153 "trsvcid": "4420", 00:31:17.153 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:31:17.153 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:31:17.153 "hostaddr": "10.0.0.1", 00:31:17.153 "prchk_reftag": false, 00:31:17.153 "prchk_guard": false, 00:31:17.153 "hdgst": false, 00:31:17.153 "ddgst": false, 00:31:17.153 "allow_unrecognized_csi": false, 00:31:17.153 "method": "bdev_nvme_attach_controller", 00:31:17.153 "req_id": 1 00:31:17.153 } 00:31:17.153 Got JSON-RPC error response 00:31:17.153 response: 00:31:17.153 { 00:31:17.153 "code": -114, 00:31:17.153 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:17.153 } 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.153 request: 00:31:17.153 { 00:31:17.153 "name": "NVMe0", 00:31:17.153 "trtype": "tcp", 00:31:17.153 "traddr": "10.0.0.2", 00:31:17.153 "adrfam": "ipv4", 00:31:17.153 "trsvcid": "4420", 00:31:17.153 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:17.153 "hostaddr": "10.0.0.1", 00:31:17.153 "prchk_reftag": false, 00:31:17.153 "prchk_guard": false, 00:31:17.153 "hdgst": false, 00:31:17.153 "ddgst": false, 00:31:17.153 "allow_unrecognized_csi": false, 00:31:17.153 "method": "bdev_nvme_attach_controller", 00:31:17.153 "req_id": 1 00:31:17.153 } 00:31:17.153 Got JSON-RPC error response 00:31:17.153 response: 00:31:17.153 { 00:31:17.153 "code": -114, 00:31:17.153 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:17.153 } 00:31:17.153 13:36:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.153 request: 00:31:17.153 { 00:31:17.153 "name": "NVMe0", 00:31:17.153 "trtype": "tcp", 00:31:17.153 "traddr": "10.0.0.2", 00:31:17.153 "adrfam": "ipv4", 00:31:17.153 "trsvcid": "4420", 00:31:17.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.153 "hostaddr": "10.0.0.1", 00:31:17.153 "prchk_reftag": false, 00:31:17.153 "prchk_guard": false, 00:31:17.153 "hdgst": false, 00:31:17.153 "ddgst": false, 00:31:17.153 "multipath": "disable", 00:31:17.153 "allow_unrecognized_csi": false, 00:31:17.153 "method": "bdev_nvme_attach_controller", 00:31:17.153 "req_id": 1 00:31:17.153 } 00:31:17.153 Got JSON-RPC error response 00:31:17.153 response: 00:31:17.153 { 00:31:17.153 "code": -114, 00:31:17.153 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:31:17.153 } 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:17.153 13:36:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.153 request: 00:31:17.153 { 00:31:17.153 "name": "NVMe0", 00:31:17.153 "trtype": "tcp", 00:31:17.153 "traddr": "10.0.0.2", 00:31:17.153 "adrfam": "ipv4", 00:31:17.153 "trsvcid": "4420", 00:31:17.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.153 "hostaddr": "10.0.0.1", 00:31:17.153 "prchk_reftag": false, 00:31:17.153 "prchk_guard": false, 00:31:17.153 "hdgst": false, 00:31:17.153 "ddgst": false, 00:31:17.153 "multipath": "failover", 00:31:17.153 "allow_unrecognized_csi": false, 00:31:17.153 "method": "bdev_nvme_attach_controller", 00:31:17.153 "req_id": 1 00:31:17.153 } 00:31:17.153 Got JSON-RPC error response 00:31:17.153 response: 00:31:17.153 { 00:31:17.153 "code": -114, 00:31:17.153 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:17.153 } 00:31:17.153 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:17.154 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:17.154 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:17.154 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:17.154 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:17.154 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:17.154 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.154 13:36:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.154 NVMe0n1 00:31:17.154 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
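
The NOT-wrapped attach attempts above (and their -114 responses) pin down bdev_nvme_attach_controller's duplicate-name rules: reusing the name NVMe0 with a different hostnqn, with a different subsystem (cnode2), or over the already-attached 10.0.0.2:4420 path is rejected even when -x disable or -x failover is passed, whereas re-attaching the same subsystem through the second listener succeeds and simply adds a path. The rejected failover call and the accepted second-path call, as the trace issues them against bdevperf's socket:

    # Rejected with -114: NVMe0 already exists on the 10.0.0.2:4420 path.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -i 10.0.0.1 -x failover
    # Accepted: same subsystem, new path via the 4421 listener.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
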
00:31:17.154 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:17.154 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.154 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.154 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.154 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:17.154 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.154 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.412 00:31:17.412 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.413 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:17.413 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.413 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:17.413 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.413 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.413 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:17.413 13:36:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:18.792 { 00:31:18.792 "results": [ 00:31:18.792 { 00:31:18.792 "job": "NVMe0n1", 00:31:18.792 "core_mask": "0x1", 00:31:18.792 "workload": "write", 00:31:18.793 "status": "finished", 00:31:18.793 "queue_depth": 128, 00:31:18.793 "io_size": 4096, 00:31:18.793 "runtime": 1.006017, 00:31:18.793 "iops": 25396.191118042738, 00:31:18.793 "mibps": 99.20387155485444, 00:31:18.793 "io_failed": 0, 00:31:18.793 "io_timeout": 0, 00:31:18.793 "avg_latency_us": 5028.06057275562, 00:31:18.793 "min_latency_us": 2307.4133333333334, 00:31:18.793 "max_latency_us": 12888.746666666666 00:31:18.793 } 00:31:18.793 ], 00:31:18.793 "core_count": 1 00:31:18.793 } 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 4011893 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 4011893 ']' 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 4011893 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4011893 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4011893' 00:31:18.793 killing process with pid 4011893 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 4011893 00:31:18.793 13:36:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 4011893 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:31:19.363 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:31:19.363 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:19.364 [2024-11-07 13:36:24.012919] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:31:19.364 [2024-11-07 13:36:24.013025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011893 ] 00:31:19.364 [2024-11-07 13:36:24.152538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.364 [2024-11-07 13:36:24.250690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.364 [2024-11-07 13:36:25.344239] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 4d40c520-76a1-4b49-8819-e593b3606e95 already exists 00:31:19.364 [2024-11-07 13:36:25.344287] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:4d40c520-76a1-4b49-8819-e593b3606e95 alias for bdev NVMe1n1 00:31:19.364 [2024-11-07 13:36:25.344302] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:19.364 Running I/O for 1 seconds... 00:31:19.364 25358.00 IOPS, 99.05 MiB/s 00:31:19.364 Latency(us) 00:31:19.364 [2024-11-07T12:36:27.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.364 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:19.364 NVMe0n1 : 1.01 25396.19 99.20 0.00 0.00 5028.06 2307.41 12888.75 00:31:19.364 [2024-11-07T12:36:27.371Z] =================================================================================================================== 00:31:19.364 [2024-11-07T12:36:27.371Z] Total : 25396.19 99.20 0.00 0.00 5028.06 2307.41 12888.75 00:31:19.364 Received shutdown signal, test time was about 1.000000 seconds 00:31:19.364 00:31:19.364 Latency(us) 00:31:19.364 [2024-11-07T12:36:27.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.364 [2024-11-07T12:36:27.371Z] =================================================================================================================== 00:31:19.364 [2024-11-07T12:36:27.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:19.364 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:19.364 rmmod nvme_tcp 00:31:19.364 rmmod nvme_fabrics 00:31:19.364 rmmod nvme_keyring 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:31:19.364 
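The try.txt dump above is the application-side log for the run: the "Bdev name ... already exists" errors are the expected side effect of attaching NVMe1 to a namespace that NVMe0 already exposes (both controllers report the same namespace UUID, so the duplicate bdev registration is refused even though the controller attach itself succeeds and bdev_nvme_get_controllers still counts 2), and the Latency(us) table is the one-second write run. bdevperf was driven remotely over its RPC socket, roughly as follows (flags inferred from the reported queue depth, IO size, workload and runtime; the perform_tests call is the one visible in the trace):

    # Start bdevperf idle (-z) on a private RPC socket, attach bdevs via
    # rpc.py as above, then kick off the measurement from a second shell.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w write -t 1 &

    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
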
13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 4011709 ']' 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 4011709 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 4011709 ']' 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 4011709 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4011709 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4011709' 00:31:19.364 killing process with pid 4011709 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 4011709 00:31:19.364 13:36:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 4011709 00:31:20.302 13:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:20.302 13:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:20.302 13:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:20.302 13:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:31:20.302 13:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:31:20.302 13:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:31:20.302 13:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:20.302 13:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:20.302 13:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:20.302 13:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.302 13:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.302 13:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:22.841 00:31:22.841 real 0m16.254s 00:31:22.841 user 0m20.616s 00:31:22.841 sys 0m7.457s 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:22.841 ************************************ 00:31:22.841 END TEST nvmf_multicontroller 00:31:22.841 ************************************ 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.841 ************************************ 00:31:22.841 START TEST nvmf_aer 00:31:22.841 ************************************ 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:22.841 * Looking for test storage... 00:31:22.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:22.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.841 --rc genhtml_branch_coverage=1 00:31:22.841 --rc genhtml_function_coverage=1 00:31:22.841 --rc genhtml_legend=1 00:31:22.841 --rc geninfo_all_blocks=1 00:31:22.841 --rc geninfo_unexecuted_blocks=1 00:31:22.841 00:31:22.841 ' 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:22.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.841 --rc genhtml_branch_coverage=1 00:31:22.841 --rc genhtml_function_coverage=1 00:31:22.841 --rc genhtml_legend=1 00:31:22.841 --rc geninfo_all_blocks=1 00:31:22.841 --rc geninfo_unexecuted_blocks=1 00:31:22.841 00:31:22.841 ' 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:22.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.841 --rc genhtml_branch_coverage=1 00:31:22.841 --rc genhtml_function_coverage=1 00:31:22.841 --rc genhtml_legend=1 00:31:22.841 --rc geninfo_all_blocks=1 00:31:22.841 --rc geninfo_unexecuted_blocks=1 00:31:22.841 00:31:22.841 ' 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:22.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.841 --rc genhtml_branch_coverage=1 00:31:22.841 --rc genhtml_function_coverage=1 00:31:22.841 --rc genhtml_legend=1 00:31:22.841 --rc geninfo_all_blocks=1 00:31:22.841 --rc geninfo_unexecuted_blocks=1 00:31:22.841 00:31:22.841 ' 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.841 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:22.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:31:22.842 13:36:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:30.967 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:30.968 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:30.968 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:30.968 Found net devices under 0000:31:00.0: cvl_0_0 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.968 13:36:38 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:30.968 Found net devices under 0000:31:00.1: cvl_0_1 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:30.968 
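With NET_TYPE=phy, the two e810 ports found above are assumed to be cabled back-to-back: the target-side port (cvl_0_0, 10.0.0.2) is moved into its own network namespace while the initiator keeps cvl_0_1 (10.0.0.1) in the default namespace, so the NVMe/TCP traffic really crosses the wire. Condensed from the nvmf_tcp_init trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # sanity check of the path
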
13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:30.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:30.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:31:30.968 00:31:30.968 --- 10.0.0.2 ping statistics --- 00:31:30.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.968 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:30.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:31:30.968 00:31:30.968 --- 10.0.0.1 ping statistics --- 00:31:30.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.968 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=4017284 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 4017284 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 4017284 ']' 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:30.968 13:36:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:30.968 [2024-11-07 13:36:38.726335] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:31:30.968 [2024-11-07 13:36:38.726451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.969 [2024-11-07 13:36:38.881258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:31.228 [2024-11-07 13:36:38.981186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.228 [2024-11-07 13:36:38.981231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:31.228 [2024-11-07 13:36:38.981243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.228 [2024-11-07 13:36:38.981254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.228 [2024-11-07 13:36:38.981263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:31.228 [2024-11-07 13:36:38.983663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.228 [2024-11-07 13:36:38.983735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:31.228 [2024-11-07 13:36:38.983851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.228 [2024-11-07 13:36:38.983893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:31.797 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:31.797 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:31:31.797 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:31.797 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:31.797 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.798 [2024-11-07 13:36:39.545185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.798 Malloc0 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.798 [2024-11-07 13:36:39.653602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.798 [ 00:31:31.798 { 00:31:31.798 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:31.798 "subtype": "Discovery", 00:31:31.798 "listen_addresses": [], 00:31:31.798 "allow_any_host": true, 00:31:31.798 "hosts": [] 00:31:31.798 }, 00:31:31.798 { 00:31:31.798 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:31.798 "subtype": "NVMe", 00:31:31.798 "listen_addresses": [ 00:31:31.798 { 00:31:31.798 "trtype": "TCP", 00:31:31.798 "adrfam": "IPv4", 00:31:31.798 "traddr": "10.0.0.2", 00:31:31.798 "trsvcid": "4420" 00:31:31.798 } 00:31:31.798 ], 00:31:31.798 "allow_any_host": true, 00:31:31.798 "hosts": [], 00:31:31.798 "serial_number": "SPDK00000000000001", 00:31:31.798 "model_number": "SPDK bdev Controller", 00:31:31.798 "max_namespaces": 2, 00:31:31.798 "min_cntlid": 1, 00:31:31.798 "max_cntlid": 65519, 00:31:31.798 "namespaces": [ 00:31:31.798 { 00:31:31.798 "nsid": 1, 00:31:31.798 "bdev_name": "Malloc0", 00:31:31.798 "name": "Malloc0", 00:31:31.798 "nguid": "F256803B4B974605A5D9AF4B8FF654E8", 00:31:31.798 "uuid": "f256803b-4b97-4605-a5d9-af4b8ff654e8" 00:31:31.798 } 00:31:31.798 ] 00:31:31.798 } 00:31:31.798 ] 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=4017575 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:31:31.798 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:31:32.117 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:32.117 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:31:32.117 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:31:32.117 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:31:32.117 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:32.117 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:32.117 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:31:32.117 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:31:32.117 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.117 13:36:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:32.432 Malloc1 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:32.432 [ 00:31:32.432 { 00:31:32.432 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:32.432 "subtype": "Discovery", 00:31:32.432 "listen_addresses": [], 00:31:32.432 "allow_any_host": true, 00:31:32.432 "hosts": [] 00:31:32.432 }, 00:31:32.432 { 00:31:32.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:32.432 "subtype": "NVMe", 00:31:32.432 "listen_addresses": [ 00:31:32.432 { 00:31:32.432 "trtype": "TCP", 00:31:32.432 "adrfam": "IPv4", 00:31:32.432 "traddr": "10.0.0.2", 00:31:32.432 "trsvcid": "4420" 00:31:32.432 } 00:31:32.432 ], 00:31:32.432 "allow_any_host": true, 00:31:32.432 "hosts": [], 00:31:32.432 "serial_number": "SPDK00000000000001", 00:31:32.432 "model_number": "SPDK bdev Controller", 00:31:32.432 "max_namespaces": 2, 00:31:32.432 "min_cntlid": 1, 00:31:32.432 "max_cntlid": 65519, 00:31:32.432 "namespaces": [ 00:31:32.432 
{ 00:31:32.432 "nsid": 1, 00:31:32.432 "bdev_name": "Malloc0", 00:31:32.432 "name": "Malloc0", 00:31:32.432 "nguid": "F256803B4B974605A5D9AF4B8FF654E8", 00:31:32.432 "uuid": "f256803b-4b97-4605-a5d9-af4b8ff654e8" 00:31:32.432 }, 00:31:32.432 { 00:31:32.432 "nsid": 2, 00:31:32.432 "bdev_name": "Malloc1", 00:31:32.432 "name": "Malloc1", 00:31:32.432 "nguid": "236A5713421546109350D95E69E4419A", 00:31:32.432 "uuid": "236a5713-4215-4610-9350-d95e69e4419a" 00:31:32.432 } 00:31:32.432 ] 00:31:32.432 } 00:31:32.432 ] 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 4017575 00:31:32.432 Asynchronous Event Request test 00:31:32.432 Attaching to 10.0.0.2 00:31:32.432 Attached to 10.0.0.2 00:31:32.432 Registering asynchronous event callbacks... 00:31:32.432 Starting namespace attribute notice tests for all controllers... 00:31:32.432 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:31:32.432 aer_cb - Changed Namespace 00:31:32.432 Cleaning up... 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.432 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:32.692 rmmod nvme_tcp 00:31:32.692 rmmod nvme_fabrics 00:31:32.692 rmmod nvme_keyring 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 4017284 ']' 
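The sequence above is the heart of the AER test: the aer helper app touches a marker file once it has registered its asynchronous-event callbacks (the "Registering asynchronous event callbacks..." line), the script blocks on that file, and only then mutates the subsystem so the resulting Namespace Attribute Changed notice (log page 4) is guaranteed to be caught. A minimal sketch of that handshake, with commands taken from the trace and paths shortened (rpc_cmd is the suite's RPC wrapper):

    ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!

    i=0                                        # waitforfile: poll in 0.1 s steps
    while [ ! -e /tmp/aer_touch_file ] && [ "$i" -lt 200 ]; do
        sleep 0.1; i=$((i + 1))
    done

    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1      # trigger: add nsid 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"                             # app exits after aer_cb fires
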
00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 4017284 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 4017284 ']' 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 4017284 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4017284 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4017284' 00:31:32.692 killing process with pid 4017284 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 4017284 00:31:32.692 13:36:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 4017284 00:31:33.641 13:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:33.641 13:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:33.641 13:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:33.641 13:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:31:33.641 13:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:31:33.641 13:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:33.641 13:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:31:33.641 13:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:33.641 13:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:33.641 13:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.641 13:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.641 13:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.552 13:36:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:35.552 00:31:35.552 real 0m13.138s 00:31:35.552 user 0m11.174s 00:31:35.552 sys 0m6.692s 00:31:35.552 13:36:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:35.552 13:36:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:35.552 ************************************ 00:31:35.552 END TEST nvmf_aer 00:31:35.552 ************************************ 00:31:35.552 13:36:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:35.553 13:36:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:35.553 13:36:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:35.553 13:36:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.553 ************************************ 00:31:35.553 START TEST nvmf_async_init 00:31:35.553 
************************************ 00:31:35.553 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:35.813 * Looking for test storage... 00:31:35.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:35.813 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:35.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.814 --rc genhtml_branch_coverage=1 00:31:35.814 --rc genhtml_function_coverage=1 00:31:35.814 --rc genhtml_legend=1 00:31:35.814 --rc geninfo_all_blocks=1 00:31:35.814 --rc geninfo_unexecuted_blocks=1 00:31:35.814 00:31:35.814 ' 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:35.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.814 --rc genhtml_branch_coverage=1 00:31:35.814 --rc genhtml_function_coverage=1 00:31:35.814 --rc genhtml_legend=1 00:31:35.814 --rc geninfo_all_blocks=1 00:31:35.814 --rc geninfo_unexecuted_blocks=1 00:31:35.814 00:31:35.814 ' 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:35.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.814 --rc genhtml_branch_coverage=1 00:31:35.814 --rc genhtml_function_coverage=1 00:31:35.814 --rc genhtml_legend=1 00:31:35.814 --rc geninfo_all_blocks=1 00:31:35.814 --rc geninfo_unexecuted_blocks=1 00:31:35.814 00:31:35.814 ' 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:35.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.814 --rc genhtml_branch_coverage=1 00:31:35.814 --rc genhtml_function_coverage=1 00:31:35.814 --rc genhtml_legend=1 00:31:35.814 --rc geninfo_all_blocks=1 00:31:35.814 --rc geninfo_unexecuted_blocks=1 00:31:35.814 00:31:35.814 ' 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.814 13:36:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:35.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:35.814 13:36:43 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=bd5eefbe3a7b4a23ab0d8e538d98e58d 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:31:35.814 13:36:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:43.943 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:43.943 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:43.943 Found net devices under 0000:31:00.0: cvl_0_0 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:43.943 Found net devices under 0000:31:00.1: cvl_0_1 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.943 13:36:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:43.943 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:43.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:31:43.944 00:31:43.944 --- 10.0.0.2 ping statistics --- 00:31:43.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.944 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:43.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:31:43.944 00:31:43.944 --- 10.0.0.1 ping statistics --- 00:31:43.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.944 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=4022560 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 4022560 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 4022560 ']' 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:43.944 13:36:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:44.203 [2024-11-07 13:36:52.021312] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:31:44.203 [2024-11-07 13:36:52.021417] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.203 [2024-11-07 13:36:52.167193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.463 [2024-11-07 13:36:52.262366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.463 [2024-11-07 13:36:52.262411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.463 [2024-11-07 13:36:52.262423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.463 [2024-11-07 13:36:52.262435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.463 [2024-11-07 13:36:52.262445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.463 [2024-11-07 13:36:52.263683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.032 [2024-11-07 13:36:52.897281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.032 null0 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bd5eefbe3a7b4a23ab0d8e538d98e58d 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.032 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.033 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:45.033 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.033 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.033 [2024-11-07 13:36:52.937575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.033 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.033 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:45.033 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.033 13:36:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.292 nvme0n1 00:31:45.292 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.292 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:45.292 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.292 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.292 [ 00:31:45.292 { 00:31:45.292 "name": "nvme0n1", 00:31:45.292 "aliases": [ 00:31:45.292 "bd5eefbe-3a7b-4a23-ab0d-8e538d98e58d" 00:31:45.292 ], 00:31:45.292 "product_name": "NVMe disk", 00:31:45.292 "block_size": 512, 00:31:45.292 "num_blocks": 2097152, 00:31:45.292 "uuid": "bd5eefbe-3a7b-4a23-ab0d-8e538d98e58d", 00:31:45.292 "numa_id": 0, 00:31:45.292 "assigned_rate_limits": { 00:31:45.292 "rw_ios_per_sec": 0, 00:31:45.292 "rw_mbytes_per_sec": 0, 00:31:45.292 "r_mbytes_per_sec": 0, 00:31:45.292 "w_mbytes_per_sec": 0 00:31:45.292 }, 00:31:45.292 "claimed": false, 00:31:45.292 "zoned": false, 00:31:45.292 "supported_io_types": { 00:31:45.292 "read": true, 00:31:45.292 "write": true, 00:31:45.292 "unmap": false, 00:31:45.292 "flush": true, 00:31:45.292 "reset": true, 00:31:45.292 "nvme_admin": true, 00:31:45.292 "nvme_io": true, 00:31:45.292 "nvme_io_md": false, 00:31:45.292 "write_zeroes": true, 00:31:45.292 "zcopy": false, 00:31:45.292 "get_zone_info": false, 00:31:45.292 "zone_management": false, 00:31:45.292 "zone_append": false, 00:31:45.292 "compare": true, 00:31:45.292 "compare_and_write": true, 00:31:45.292 "abort": true, 00:31:45.292 "seek_hole": false, 00:31:45.292 "seek_data": false, 00:31:45.292 "copy": true, 00:31:45.292 "nvme_iov_md": false 00:31:45.292 }, 00:31:45.292 
"memory_domains": [ 00:31:45.292 { 00:31:45.292 "dma_device_id": "system", 00:31:45.292 "dma_device_type": 1 00:31:45.292 } 00:31:45.292 ], 00:31:45.292 "driver_specific": { 00:31:45.292 "nvme": [ 00:31:45.292 { 00:31:45.292 "trid": { 00:31:45.292 "trtype": "TCP", 00:31:45.292 "adrfam": "IPv4", 00:31:45.292 "traddr": "10.0.0.2", 00:31:45.292 "trsvcid": "4420", 00:31:45.292 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:45.292 }, 00:31:45.292 "ctrlr_data": { 00:31:45.292 "cntlid": 1, 00:31:45.292 "vendor_id": "0x8086", 00:31:45.292 "model_number": "SPDK bdev Controller", 00:31:45.292 "serial_number": "00000000000000000000", 00:31:45.292 "firmware_revision": "25.01", 00:31:45.292 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.292 "oacs": { 00:31:45.292 "security": 0, 00:31:45.292 "format": 0, 00:31:45.292 "firmware": 0, 00:31:45.292 "ns_manage": 0 00:31:45.292 }, 00:31:45.292 "multi_ctrlr": true, 00:31:45.292 "ana_reporting": false 00:31:45.292 }, 00:31:45.292 "vs": { 00:31:45.292 "nvme_version": "1.3" 00:31:45.292 }, 00:31:45.292 "ns_data": { 00:31:45.292 "id": 1, 00:31:45.292 "can_share": true 00:31:45.292 } 00:31:45.292 } 00:31:45.292 ], 00:31:45.292 "mp_policy": "active_passive" 00:31:45.292 } 00:31:45.292 } 00:31:45.292 ] 00:31:45.292 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.292 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:45.292 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.292 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.292 [2024-11-07 13:36:53.189288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:45.292 [2024-11-07 13:36:53.189381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:31:45.552 [2024-11-07 13:36:53.332002] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.552 [ 00:31:45.552 { 00:31:45.552 "name": "nvme0n1", 00:31:45.552 "aliases": [ 00:31:45.552 "bd5eefbe-3a7b-4a23-ab0d-8e538d98e58d" 00:31:45.552 ], 00:31:45.552 "product_name": "NVMe disk", 00:31:45.552 "block_size": 512, 00:31:45.552 "num_blocks": 2097152, 00:31:45.552 "uuid": "bd5eefbe-3a7b-4a23-ab0d-8e538d98e58d", 00:31:45.552 "numa_id": 0, 00:31:45.552 "assigned_rate_limits": { 00:31:45.552 "rw_ios_per_sec": 0, 00:31:45.552 "rw_mbytes_per_sec": 0, 00:31:45.552 "r_mbytes_per_sec": 0, 00:31:45.552 "w_mbytes_per_sec": 0 00:31:45.552 }, 00:31:45.552 "claimed": false, 00:31:45.552 "zoned": false, 00:31:45.552 "supported_io_types": { 00:31:45.552 "read": true, 00:31:45.552 "write": true, 00:31:45.552 "unmap": false, 00:31:45.552 "flush": true, 00:31:45.552 "reset": true, 00:31:45.552 "nvme_admin": true, 00:31:45.552 "nvme_io": true, 00:31:45.552 "nvme_io_md": false, 00:31:45.552 "write_zeroes": true, 00:31:45.552 "zcopy": false, 00:31:45.552 "get_zone_info": false, 00:31:45.552 "zone_management": false, 00:31:45.552 "zone_append": false, 00:31:45.552 "compare": true, 00:31:45.552 "compare_and_write": true, 00:31:45.552 "abort": true, 00:31:45.552 "seek_hole": false, 00:31:45.552 "seek_data": false, 00:31:45.552 "copy": true, 00:31:45.552 "nvme_iov_md": false 00:31:45.552 }, 00:31:45.552 "memory_domains": [ 00:31:45.552 { 00:31:45.552 "dma_device_id": "system", 00:31:45.552 "dma_device_type": 1 00:31:45.552 } 00:31:45.552 ], 00:31:45.552 "driver_specific": { 00:31:45.552 "nvme": [ 00:31:45.552 { 00:31:45.552 "trid": { 00:31:45.552 "trtype": "TCP", 00:31:45.552 "adrfam": "IPv4", 00:31:45.552 "traddr": "10.0.0.2", 00:31:45.552 "trsvcid": "4420", 00:31:45.552 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:45.552 }, 00:31:45.552 "ctrlr_data": { 00:31:45.552 "cntlid": 2, 00:31:45.552 "vendor_id": "0x8086", 00:31:45.552 "model_number": "SPDK bdev Controller", 00:31:45.552 "serial_number": "00000000000000000000", 00:31:45.552 "firmware_revision": "25.01", 00:31:45.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.552 "oacs": { 00:31:45.552 "security": 0, 00:31:45.552 "format": 0, 00:31:45.552 "firmware": 0, 00:31:45.552 "ns_manage": 0 00:31:45.552 }, 00:31:45.552 "multi_ctrlr": true, 00:31:45.552 "ana_reporting": false 00:31:45.552 }, 00:31:45.552 "vs": { 00:31:45.552 "nvme_version": "1.3" 00:31:45.552 }, 00:31:45.552 "ns_data": { 00:31:45.552 "id": 1, 00:31:45.552 "can_share": true 00:31:45.552 } 00:31:45.552 } 00:31:45.552 ], 00:31:45.552 "mp_policy": "active_passive" 00:31:45.552 } 00:31:45.552 } 00:31:45.552 ] 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.FXTy8zZdaA 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.FXTy8zZdaA 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.FXTy8zZdaA 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.552 [2024-11-07 13:36:53.398007] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:45.552 [2024-11-07 13:36:53.398168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.552 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.553 [2024-11-07 13:36:53.414066] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:45.553 nvme0n1 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.553 [ 00:31:45.553 { 00:31:45.553 "name": "nvme0n1", 00:31:45.553 "aliases": [ 00:31:45.553 "bd5eefbe-3a7b-4a23-ab0d-8e538d98e58d" 00:31:45.553 ], 00:31:45.553 "product_name": "NVMe disk", 00:31:45.553 "block_size": 512, 00:31:45.553 "num_blocks": 2097152, 00:31:45.553 "uuid": "bd5eefbe-3a7b-4a23-ab0d-8e538d98e58d", 00:31:45.553 "numa_id": 0, 00:31:45.553 "assigned_rate_limits": { 00:31:45.553 "rw_ios_per_sec": 0, 00:31:45.553 "rw_mbytes_per_sec": 0, 00:31:45.553 "r_mbytes_per_sec": 0, 00:31:45.553 "w_mbytes_per_sec": 0 00:31:45.553 }, 00:31:45.553 "claimed": false, 00:31:45.553 "zoned": false, 00:31:45.553 "supported_io_types": { 00:31:45.553 "read": true, 00:31:45.553 "write": true, 00:31:45.553 "unmap": false, 00:31:45.553 "flush": true, 00:31:45.553 "reset": true, 00:31:45.553 "nvme_admin": true, 00:31:45.553 "nvme_io": true, 00:31:45.553 "nvme_io_md": false, 00:31:45.553 "write_zeroes": true, 00:31:45.553 "zcopy": false, 00:31:45.553 "get_zone_info": false, 00:31:45.553 "zone_management": false, 00:31:45.553 "zone_append": false, 00:31:45.553 "compare": true, 00:31:45.553 "compare_and_write": true, 00:31:45.553 "abort": true, 00:31:45.553 "seek_hole": false, 00:31:45.553 "seek_data": false, 00:31:45.553 "copy": true, 00:31:45.553 "nvme_iov_md": false 00:31:45.553 }, 00:31:45.553 "memory_domains": [ 00:31:45.553 { 00:31:45.553 "dma_device_id": "system", 00:31:45.553 "dma_device_type": 1 00:31:45.553 } 00:31:45.553 ], 00:31:45.553 "driver_specific": { 00:31:45.553 "nvme": [ 00:31:45.553 { 00:31:45.553 "trid": { 00:31:45.553 "trtype": "TCP", 00:31:45.553 "adrfam": "IPv4", 00:31:45.553 "traddr": "10.0.0.2", 00:31:45.553 "trsvcid": "4421", 00:31:45.553 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:45.553 }, 00:31:45.553 "ctrlr_data": { 00:31:45.553 "cntlid": 3, 00:31:45.553 "vendor_id": "0x8086", 00:31:45.553 "model_number": "SPDK bdev Controller", 00:31:45.553 "serial_number": "00000000000000000000", 00:31:45.553 "firmware_revision": "25.01", 00:31:45.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.553 "oacs": { 00:31:45.553 "security": 0, 00:31:45.553 "format": 0, 00:31:45.553 "firmware": 0, 00:31:45.553 "ns_manage": 0 00:31:45.553 }, 00:31:45.553 "multi_ctrlr": true, 00:31:45.553 "ana_reporting": false 00:31:45.553 }, 00:31:45.553 "vs": { 00:31:45.553 "nvme_version": "1.3" 00:31:45.553 }, 00:31:45.553 "ns_data": { 00:31:45.553 "id": 1, 00:31:45.553 "can_share": true 00:31:45.553 } 00:31:45.553 } 00:31:45.553 ], 00:31:45.553 "mp_policy": "active_passive" 00:31:45.553 } 00:31:45.553 } 00:31:45.553 ] 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.FXTy8zZdaA 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
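The secure-channel portion traced above exercises the NVMe/TCP TLS path: a PSK in the NVMe interchange format ("NVMeTLSkey-1:01:...") is written to a temp file, registered with the keyring, the port-4421 listener is created with --secure-channel, and both the subsystem's host entry and the initiator-side attach reference the same key. A condensed sketch of those steps, assuming scripts/rpc.py and the addresses from this run (the key path is hypothetical; the test uses mktemp):

  KEY=/tmp/psk.key
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
  chmod 0600 "$KEY"     # the test restricts the key file before registering it
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Note both nvmf_tcp_listen and rpc_bdev_nvme_attach_controller log that TLS support is considered experimental, as seen in the trace.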
00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:45.553 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:45.553 rmmod nvme_tcp 00:31:45.553 rmmod nvme_fabrics 00:31:45.813 rmmod nvme_keyring 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 4022560 ']' 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 4022560 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 4022560 ']' 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 4022560 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4022560 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4022560' 00:31:45.813 killing process with pid 4022560 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 4022560 00:31:45.813 13:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 4022560 00:31:46.750 13:36:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:46.750 13:36:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:46.750 13:36:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:46.750 13:36:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:31:46.750 13:36:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:31:46.750 13:36:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:46.750 13:36:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:31:46.750 13:36:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:46.750 13:36:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:46.750 13:36:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
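The nvmftestfini teardown traced here (and earlier in this section for nvmf_aer) follows one fixed pattern: unload the kernel initiator modules, kill the nvmf_tgt app by pid, strip the SPDK-tagged iptables rules, and clean up the test network namespace. Roughly, under the interface names used in this run ($nvmfpid comes from nvmfappstart; the netns deletion is an assumption about what _remove_spdk_ns wraps):

  modprobe -v -r nvme-tcp               # trace shows rmmod nvme_tcp / nvme_fabrics / nvme_keyring
  kill "$nvmfpid" && wait "$nvmfpid"    # killprocess: kill -0 check, SIGTERM, then wait
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk       # assumed; _remove_spdk_ns handles namespace cleanup
  ip -4 addr flush cvl_0_1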
00:31:46.750 13:36:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.750 13:36:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.657 13:36:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:48.657 00:31:48.657 real 0m12.994s 00:31:48.657 user 0m4.873s 00:31:48.657 sys 0m6.635s 00:31:48.657 13:36:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:48.657 13:36:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:48.657 ************************************ 00:31:48.657 END TEST nvmf_async_init 00:31:48.657 ************************************ 00:31:48.657 13:36:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:48.657 13:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:48.657 13:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:48.657 13:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.657 ************************************ 00:31:48.657 START TEST dma 00:31:48.657 ************************************ 00:31:48.657 13:36:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:48.657 * Looking for test storage... 00:31:48.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:48.657 13:36:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:48.657 13:36:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:31:48.657 13:36:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:48.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.918 --rc genhtml_branch_coverage=1 00:31:48.918 --rc genhtml_function_coverage=1 00:31:48.918 --rc genhtml_legend=1 00:31:48.918 --rc geninfo_all_blocks=1 00:31:48.918 --rc geninfo_unexecuted_blocks=1 00:31:48.918 00:31:48.918 ' 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:48.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.918 --rc genhtml_branch_coverage=1 00:31:48.918 --rc genhtml_function_coverage=1 00:31:48.918 --rc genhtml_legend=1 00:31:48.918 --rc geninfo_all_blocks=1 00:31:48.918 --rc geninfo_unexecuted_blocks=1 00:31:48.918 00:31:48.918 ' 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:48.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.918 --rc genhtml_branch_coverage=1 00:31:48.918 --rc genhtml_function_coverage=1 00:31:48.918 --rc genhtml_legend=1 00:31:48.918 --rc geninfo_all_blocks=1 00:31:48.918 --rc geninfo_unexecuted_blocks=1 00:31:48.918 00:31:48.918 ' 00:31:48.918 13:36:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:48.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.918 --rc genhtml_branch_coverage=1 00:31:48.918 --rc genhtml_function_coverage=1 00:31:48.918 --rc genhtml_legend=1 00:31:48.918 --rc geninfo_all_blocks=1 00:31:48.919 --rc geninfo_unexecuted_blocks=1 00:31:48.919 00:31:48.919 ' 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.919 
13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
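[editor's note] Earlier in this chunk, nvmf/common.sh derived the host identity from `nvme gen-hostnqn`. The result is a UUID-based NQN; a rough stand-in is below (uuidgen is an assumption here — nvme-cli may reuse a host UUID from DMI/sysfs rather than minting a fresh one):

    # Shape of the value gen-hostnqn produced in the trace above.
    NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
    echo "$NVME_HOSTNQN"   # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...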
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:48.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:48.919 00:31:48.919 real 0m0.239s 00:31:48.919 user 0m0.137s 00:31:48.919 sys 0m0.118s 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:48.919 ************************************ 00:31:48.919 END TEST dma 00:31:48.919 ************************************ 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.919 ************************************ 00:31:48.919 START TEST nvmf_identify 00:31:48.919 
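[editor's note] The stderr line interleaved above — "common.sh: line 33: [: : integer expression expected" — is a real shell pitfall the xtrace makes visible: the preceding step ran '[' '' -eq 1 ']', asking test(1) to integer-compare an empty string. Defaulting the operand silences it (FLAG is an illustrative stand-in for whatever variable was unset, not the script's real name):

    # Fails with "[: : integer expression expected" when FLAG is empty:
    #   [ "$FLAG" -eq 1 ]
    # Safe form:
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi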
************************************ 00:31:48.919 13:36:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:49.180 * Looking for test storage... 00:31:49.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:49.180 13:36:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:49.180 13:36:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:31:49.180 13:36:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:49.180 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:49.180 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.180 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.180 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.180 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.180 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.180 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.180 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.180 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.180 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.180 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.180 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:49.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.181 --rc genhtml_branch_coverage=1 00:31:49.181 --rc genhtml_function_coverage=1 00:31:49.181 --rc genhtml_legend=1 00:31:49.181 --rc geninfo_all_blocks=1 00:31:49.181 --rc geninfo_unexecuted_blocks=1 00:31:49.181 00:31:49.181 ' 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:49.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.181 --rc genhtml_branch_coverage=1 00:31:49.181 --rc genhtml_function_coverage=1 00:31:49.181 --rc genhtml_legend=1 00:31:49.181 --rc geninfo_all_blocks=1 00:31:49.181 --rc geninfo_unexecuted_blocks=1 00:31:49.181 00:31:49.181 ' 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:49.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.181 --rc genhtml_branch_coverage=1 00:31:49.181 --rc genhtml_function_coverage=1 00:31:49.181 --rc genhtml_legend=1 00:31:49.181 --rc geninfo_all_blocks=1 00:31:49.181 --rc geninfo_unexecuted_blocks=1 00:31:49.181 00:31:49.181 ' 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:49.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.181 --rc genhtml_branch_coverage=1 00:31:49.181 --rc genhtml_function_coverage=1 00:31:49.181 --rc genhtml_legend=1 00:31:49.181 --rc geninfo_all_blocks=1 00:31:49.181 --rc geninfo_unexecuted_blocks=1 00:31:49.181 00:31:49.181 ' 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:49.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
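[editor's note] The PATH values in these paths/export.sh traces keep growing because the script prepends the same /opt/go, /opt/protoc and /opt/golangci triple unconditionally each time it is sourced, and this run sources it once per test. A sketch of an idempotent prepend (illustrative, not SPDK's code) that would keep the PATH flat:

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already on PATH: do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH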
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.181 13:36:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:57.306 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:57.306 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
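[editor's note] gather_supported_nvmf_pci_devs buckets NICs purely by PCI vendor:device ID, as the array appends above show: Intel (0x8086) devices 0x1592/0x159b land in e810, 0x37d2 in x722, and the Mellanox (0x15b3) list in mlx. A stand-alone way to reproduce the E810 match on this host — lspci is a generic tool choice for the sketch, not necessarily how pci_bus_cache is populated internally:

    # List PCI functions whose vendor:device is Intel E810 (0x8086:0x159b),
    # the pair the trace matches for 0000:31:00.0 and 0000:31:00.1.
    lspci -Dn -d 8086:159b | awk '{print "E810 function:", $1}'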
00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:57.306 Found net devices under 0000:31:00.0: cvl_0_0 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:57.306 Found net devices under 0000:31:00.1: cvl_0_1 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:57.306 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.307 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
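[editor's note] Once a candidate PCI function is found, the script resolves it to a kernel netdev through the /sys/bus/pci/devices/$pci/net/ glob visible in the trace. The same lookup done by hand for the two ports found above:

    for pci in 0000:31:00.0 0000:31:00.1; do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "$pci -> ${net##*/}"
        done
    done
    # expected on this host: 0000:31:00.0 -> cvl_0_0, 0000:31:00.1 -> cvl_0_1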
"$NVMF_TARGET_NAMESPACE") 00:31:57.307 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:57.307 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:57.307 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:57.307 13:37:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:57.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:31:57.307 00:31:57.307 --- 10.0.0.2 ping statistics --- 00:31:57.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.307 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:57.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:31:57.307 00:31:57.307 --- 10.0.0.1 ping statistics --- 00:31:57.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.307 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4027636 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4027636 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 4027636 ']' 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:57.307 13:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:57.567 [2024-11-07 13:37:05.346551] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
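[editor's note] nvmf_tcp_init has now built the test topology: the two back-to-back E810 ports are split across network namespaces so traffic actually crosses the wire instead of short-circuiting through the local stack. Collected from the trace into one place (cvl_0_0 = target side at 10.0.0.2 inside the netns, cvl_0_1 = initiator side at 10.0.0.1 in the root namespace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # root ns -> target (0.689 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # netns -> initiator (0.316 ms above)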
00:31:57.567 [2024-11-07 13:37:05.346660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:57.567 [2024-11-07 13:37:05.493815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:57.827 [2024-11-07 13:37:05.595158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:57.827 [2024-11-07 13:37:05.595199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:57.827 [2024-11-07 13:37:05.595211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:57.827 [2024-11-07 13:37:05.595222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:57.827 [2024-11-07 13:37:05.595231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:57.827 [2024-11-07 13:37:05.597733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.827 [2024-11-07 13:37:05.597819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:57.827 [2024-11-07 13:37:05.597968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.827 [2024-11-07 13:37:05.597991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:58.396 [2024-11-07 13:37:06.098827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:58.396 Malloc0 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
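[editor's note] The target was launched as `nvmf_tgt -i 0 -e 0xFFFF -m 0xF`: -i is the shared-memory id, -e the tracepoint group mask echoed by the app_setup_trace notices, and -m a hex core mask. A quick expansion of the mask, matching the four "Reactor started on core N" notices above:

    mask=0xF        # -m 0xF: bits 0..3 set
    for (( core = 0; core < 8; core++ )); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # prints cores 0, 1, 2, 3 - one SPDK reactor thread per selected core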
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.396 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:58.397 [2024-11-07 13:37:06.244748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:58.397 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.397 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:58.397 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.397 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:58.397 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.397 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:58.397 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.397 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:58.397 [ 00:31:58.397 { 00:31:58.397 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:58.397 "subtype": "Discovery", 00:31:58.397 "listen_addresses": [ 00:31:58.397 { 00:31:58.397 "trtype": "TCP", 00:31:58.397 "adrfam": "IPv4", 00:31:58.397 "traddr": "10.0.0.2", 00:31:58.397 "trsvcid": "4420" 00:31:58.397 } 00:31:58.397 ], 00:31:58.397 "allow_any_host": true, 00:31:58.397 "hosts": [] 00:31:58.397 }, 00:31:58.397 { 00:31:58.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:58.397 "subtype": "NVMe", 00:31:58.397 "listen_addresses": [ 00:31:58.397 { 00:31:58.397 "trtype": "TCP", 00:31:58.397 "adrfam": "IPv4", 00:31:58.397 "traddr": "10.0.0.2", 00:31:58.397 "trsvcid": "4420" 00:31:58.397 } 00:31:58.397 ], 00:31:58.397 "allow_any_host": true, 00:31:58.397 "hosts": [], 00:31:58.397 "serial_number": "SPDK00000000000001", 00:31:58.397 "model_number": "SPDK bdev Controller", 00:31:58.397 "max_namespaces": 32, 00:31:58.397 "min_cntlid": 1, 00:31:58.397 "max_cntlid": 65519, 00:31:58.397 "namespaces": [ 00:31:58.397 { 00:31:58.397 "nsid": 1, 00:31:58.397 "bdev_name": "Malloc0", 00:31:58.397 "name": "Malloc0", 00:31:58.397 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:58.397 "eui64": "ABCDEF0123456789", 00:31:58.397 "uuid": "20c97dcc-8a89-46e7-bab7-383c00cd5d2c" 00:31:58.397 } 00:31:58.397 ] 00:31:58.397 } 00:31:58.397 ] 00:31:58.397 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.397 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
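[editor's note] The rpc_cmd calls above assemble the target end to end; rpc_cmd forwards them to scripts/rpc.py over the /var/tmp/spdk.sock UNIX socket, which stays reachable from the root namespace even though nvmf_tgt runs inside the netns (the filesystem is not namespaced). The same bring-up issued directly, with the exact arguments from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_get_subsystems     # returns the JSON dumped in the trace above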
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:58.397 [2024-11-07 13:37:06.329041] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:31:58.397 [2024-11-07 13:37:06.329133] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027963 ] 00:31:58.660 [2024-11-07 13:37:06.401184] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:31:58.660 [2024-11-07 13:37:06.401285] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:58.660 [2024-11-07 13:37:06.401298] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:58.660 [2024-11-07 13:37:06.401319] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:58.660 [2024-11-07 13:37:06.401338] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:58.660 [2024-11-07 13:37:06.405260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:31:58.660 [2024-11-07 13:37:06.405313] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000025600 0 00:31:58.660 [2024-11-07 13:37:06.412881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:58.660 [2024-11-07 13:37:06.412909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:58.660 [2024-11-07 13:37:06.412918] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:58.660 [2024-11-07 13:37:06.412925] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:58.660 [2024-11-07 13:37:06.412979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.660 [2024-11-07 13:37:06.412990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.660 [2024-11-07 13:37:06.413003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.660 [2024-11-07 13:37:06.413026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:58.660 [2024-11-07 13:37:06.413052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.660 [2024-11-07 13:37:06.420883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.660 [2024-11-07 13:37:06.420905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.660 [2024-11-07 13:37:06.420912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.660 [2024-11-07 13:37:06.420921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.660 [2024-11-07 13:37:06.420939] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:58.660 [2024-11-07 13:37:06.420958] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:31:58.660 [2024-11-07 13:37:06.420968] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:31:58.660 [2024-11-07 
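[editor's note] The numeric "pdu type" values in these nvme_tcp traces are NVMe/TCP PDU opcodes. A small decoder for the ones that appear in this log (values per the NVMe/TCP transport specification; treat as a reading aid, not SPDK source):

    pdu_name() {
        case $1 in
            0) echo ICReq ;;       1) echo ICResp ;;
            2) echo H2CTermReq ;;  3) echo C2HTermReq ;;
            4) echo CapsuleCmd ;;  5) echo CapsuleResp ;;
            6) echo H2CData ;;     7) echo C2HData ;;
            9) echo R2T ;;         *) echo "reserved ($1)" ;;
        esac
    }
    pdu_name 1   # ICResp - reply to the icreq sent right after connect
    pdu_name 5   # CapsuleResp - completion for an admin/fabrics capsule
    pdu_name 7   # C2HData - controller-to-host data (the identify payloads)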
13:37:06.420985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.660 [2024-11-07 13:37:06.420993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.660 [2024-11-07 13:37:06.421000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.660 [2024-11-07 13:37:06.421020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.660 [2024-11-07 13:37:06.421046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.660 [2024-11-07 13:37:06.421292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.660 [2024-11-07 13:37:06.421304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.660 [2024-11-07 13:37:06.421310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.660 [2024-11-07 13:37:06.421319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.660 [2024-11-07 13:37:06.421332] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:31:58.660 [2024-11-07 13:37:06.421344] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:31:58.660 [2024-11-07 13:37:06.421355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.660 [2024-11-07 13:37:06.421363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.660 [2024-11-07 13:37:06.421369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.660 [2024-11-07 13:37:06.421384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.660 [2024-11-07 13:37:06.421399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.660 [2024-11-07 13:37:06.421640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.660 [2024-11-07 13:37:06.421653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.660 [2024-11-07 13:37:06.421659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.660 [2024-11-07 13:37:06.421665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.660 [2024-11-07 13:37:06.421674] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:31:58.660 [2024-11-07 13:37:06.421689] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:58.660 [2024-11-07 13:37:06.421700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.421707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.421714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.661 [2024-11-07 13:37:06.421726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.661 [2024-11-07 13:37:06.421741] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.661 [2024-11-07 13:37:06.421943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.661 [2024-11-07 13:37:06.421954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.661 [2024-11-07 13:37:06.421959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.421965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.661 [2024-11-07 13:37:06.421974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:58.661 [2024-11-07 13:37:06.421988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.421997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.422005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.661 [2024-11-07 13:37:06.422018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.661 [2024-11-07 13:37:06.422033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.661 [2024-11-07 13:37:06.422246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.661 [2024-11-07 13:37:06.422255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.661 [2024-11-07 13:37:06.422261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.422267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.661 [2024-11-07 13:37:06.422275] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:58.661 [2024-11-07 13:37:06.422284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:58.661 [2024-11-07 13:37:06.422296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:58.661 [2024-11-07 13:37:06.422405] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:31:58.661 [2024-11-07 13:37:06.422413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:58.661 [2024-11-07 13:37:06.422432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.422438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.422445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.661 [2024-11-07 13:37:06.422461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.661 [2024-11-07 13:37:06.422476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.661 [2024-11-07 13:37:06.422687] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.661 [2024-11-07 13:37:06.422700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.661 [2024-11-07 13:37:06.422706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.422712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.661 [2024-11-07 13:37:06.422720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:58.661 [2024-11-07 13:37:06.422734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.422741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.422747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.661 [2024-11-07 13:37:06.422759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.661 [2024-11-07 13:37:06.422773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.661 [2024-11-07 13:37:06.422997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.661 [2024-11-07 13:37:06.423007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.661 [2024-11-07 13:37:06.423013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.423019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.661 [2024-11-07 13:37:06.423027] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:58.661 [2024-11-07 13:37:06.423035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:58.661 [2024-11-07 13:37:06.423052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:31:58.661 [2024-11-07 13:37:06.423062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:58.661 [2024-11-07 13:37:06.423084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.423092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.661 [2024-11-07 13:37:06.423104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.661 [2024-11-07 13:37:06.423120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.661 [2024-11-07 13:37:06.423367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:58.661 [2024-11-07 13:37:06.423378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:58.661 [2024-11-07 13:37:06.423384] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.423392] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info 
on tqpair(0x615000025600): datao=0, datal=4096, cccid=0 00:31:58.661 [2024-11-07 13:37:06.423400] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:58.661 [2024-11-07 13:37:06.423408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.423422] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.423431] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.423556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.661 [2024-11-07 13:37:06.423565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.661 [2024-11-07 13:37:06.423571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.423577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.661 [2024-11-07 13:37:06.423594] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:31:58.661 [2024-11-07 13:37:06.423605] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:31:58.661 [2024-11-07 13:37:06.423613] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:31:58.661 [2024-11-07 13:37:06.423621] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:31:58.661 [2024-11-07 13:37:06.423629] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:31:58.661 [2024-11-07 13:37:06.423637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:31:58.661 [2024-11-07 13:37:06.423652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:58.661 [2024-11-07 13:37:06.423666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.423682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.423689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.661 [2024-11-07 13:37:06.423702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:58.661 [2024-11-07 13:37:06.423718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.661 [2024-11-07 13:37:06.423937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.661 [2024-11-07 13:37:06.423948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.661 [2024-11-07 13:37:06.423953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.423960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.661 [2024-11-07 13:37:06.423974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.423981] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.423988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.661 [2024-11-07 13:37:06.424001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.661 [2024-11-07 13:37:06.424011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.424017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.424028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000025600) 00:31:58.661 [2024-11-07 13:37:06.424038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.661 [2024-11-07 13:37:06.424047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.424053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.424058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000025600) 00:31:58.661 [2024-11-07 13:37:06.424068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.661 [2024-11-07 13:37:06.424076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.424082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.661 [2024-11-07 13:37:06.424088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.661 [2024-11-07 13:37:06.424097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.661 [2024-11-07 13:37:06.424105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:58.661 [2024-11-07 13:37:06.424124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:58.661 [2024-11-07 13:37:06.424140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.424146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:58.662 [2024-11-07 13:37:06.424158] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.662 [2024-11-07 13:37:06.424176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.662 [2024-11-07 13:37:06.424184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:58.662 [2024-11-07 13:37:06.424191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:58.662 [2024-11-07 13:37:06.424198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.662 [2024-11-07 13:37:06.424205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:58.662 [2024-11-07 13:37:06.424475] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.662 [2024-11-07 13:37:06.424487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.662 [2024-11-07 13:37:06.424493] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.424499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:58.662 [2024-11-07 13:37:06.424508] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:31:58.662 [2024-11-07 13:37:06.424517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:31:58.662 [2024-11-07 13:37:06.424537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.424544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:58.662 [2024-11-07 13:37:06.424556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.662 [2024-11-07 13:37:06.424571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:58.662 [2024-11-07 13:37:06.424794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:58.662 [2024-11-07 13:37:06.424806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:58.662 [2024-11-07 13:37:06.424812] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.424819] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:31:58.662 [2024-11-07 13:37:06.424827] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:58.662 [2024-11-07 13:37:06.424838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.424860] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.428881] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.428896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.662 [2024-11-07 13:37:06.428905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.662 [2024-11-07 13:37:06.428911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.428918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:58.662 [2024-11-07 13:37:06.428946] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:31:58.662 [2024-11-07 13:37:06.428987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.428997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:58.662 [2024-11-07 13:37:06.429012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.662 [2024-11-07 13:37:06.429023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:31:58.662 [2024-11-07 13:37:06.429030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.429036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:58.662 [2024-11-07 13:37:06.429047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.662 [2024-11-07 13:37:06.429068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:58.662 [2024-11-07 13:37:06.429077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:58.662 [2024-11-07 13:37:06.429373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:58.662 [2024-11-07 13:37:06.429384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:58.662 [2024-11-07 13:37:06.429392] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.429399] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=1024, cccid=4 00:31:58.662 [2024-11-07 13:37:06.429407] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=1024 00:31:58.662 [2024-11-07 13:37:06.429414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.429428] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.429435] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.429444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.662 [2024-11-07 13:37:06.429454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.662 [2024-11-07 13:37:06.429460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.429467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:58.662 [2024-11-07 13:37:06.470097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.662 [2024-11-07 13:37:06.470116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.662 [2024-11-07 13:37:06.470122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.470136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:58.662 [2024-11-07 13:37:06.470162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.662 [2024-11-07 13:37:06.470170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:58.662 [2024-11-07 13:37:06.470186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.662 [2024-11-07 13:37:06.470209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:58.662 [2024-11-07 13:37:06.470410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:58.662 [2024-11-07 13:37:06.470419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:58.662 [2024-11-07 13:37:06.470425] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:58.662 
[2024-11-07 13:37:06.470432] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=3072, cccid=4
00:31:58.662 [2024-11-07 13:37:06.470439] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=3072
00:31:58.662 [2024-11-07 13:37:06.470445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:58.662 [2024-11-07 13:37:06.470456] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:31:58.662 [2024-11-07 13:37:06.470461] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:31:58.662 [2024-11-07 13:37:06.470611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:58.662 [2024-11-07 13:37:06.470621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:58.662 [2024-11-07 13:37:06.470626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:58.662 [2024-11-07 13:37:06.470632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600
00:31:58.662 [2024-11-07 13:37:06.470648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:58.662 [2024-11-07 13:37:06.470655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600)
00:31:58.662 [2024-11-07 13:37:06.470667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.662 [2024-11-07 13:37:06.470686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0
00:31:58.662 [2024-11-07 13:37:06.470963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:31:58.662 [2024-11-07 13:37:06.470973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:31:58.662 [2024-11-07 13:37:06.470978] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:31:58.662 [2024-11-07 13:37:06.470985] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=8, cccid=4
00:31:58.662 [2024-11-07 13:37:06.470992] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=8
00:31:58.662 [2024-11-07 13:37:06.470998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:58.662 [2024-11-07 13:37:06.471010] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:31:58.662 [2024-11-07 13:37:06.471016] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:31:58.662 [2024-11-07 13:37:06.512059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:58.662 [2024-11-07 13:37:06.512081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:58.662 [2024-11-07 13:37:06.512087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:58.662 [2024-11-07 13:37:06.512094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600
=====================================================
00:31:58.662 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:31:58.662 =====================================================
00:31:58.662 Controller Capabilities/Features
00:31:58.662 ================================
00:31:58.662 Vendor ID: 0000
00:31:58.662 Subsystem Vendor ID: 0000
00:31:58.662 Serial Number: ....................
00:31:58.662 Model Number: ........................................
00:31:58.662 Firmware Version: 25.01
00:31:58.662 Recommended Arb Burst: 0
00:31:58.662 IEEE OUI Identifier: 00 00 00
00:31:58.662 Multi-path I/O
00:31:58.662 May have multiple subsystem ports: No
00:31:58.662 May have multiple controllers: No
00:31:58.662 Associated with SR-IOV VF: No
00:31:58.662 Max Data Transfer Size: 131072
00:31:58.662 Max Number of Namespaces: 0
00:31:58.662 Max Number of I/O Queues: 1024
00:31:58.662 NVMe Specification Version (VS): 1.3
00:31:58.662 NVMe Specification Version (Identify): 1.3
00:31:58.662 Maximum Queue Entries: 128
00:31:58.662 Contiguous Queues Required: Yes
00:31:58.662 Arbitration Mechanisms Supported
00:31:58.662 Weighted Round Robin: Not Supported
00:31:58.662 Vendor Specific: Not Supported
00:31:58.662 Reset Timeout: 15000 ms
00:31:58.662 Doorbell Stride: 4 bytes
00:31:58.662 NVM Subsystem Reset: Not Supported
00:31:58.662 Command Sets Supported
00:31:58.663 NVM Command Set: Supported
00:31:58.663 Boot Partition: Not Supported
00:31:58.663 Memory Page Size Minimum: 4096 bytes
00:31:58.663 Memory Page Size Maximum: 4096 bytes
00:31:58.663 Persistent Memory Region: Not Supported
00:31:58.663 Optional Asynchronous Events Supported
00:31:58.663 Namespace Attribute Notices: Not Supported
00:31:58.663 Firmware Activation Notices: Not Supported
00:31:58.663 ANA Change Notices: Not Supported
00:31:58.663 PLE Aggregate Log Change Notices: Not Supported
00:31:58.663 LBA Status Info Alert Notices: Not Supported
00:31:58.663 EGE Aggregate Log Change Notices: Not Supported
00:31:58.663 Normal NVM Subsystem Shutdown event: Not Supported
00:31:58.663 Zone Descriptor Change Notices: Not Supported
00:31:58.663 Discovery Log Change Notices: Supported
00:31:58.663 Controller Attributes
00:31:58.663 128-bit Host Identifier: Not Supported
00:31:58.663 Non-Operational Permissive Mode: Not Supported
00:31:58.663 NVM Sets: Not Supported
00:31:58.663 Read Recovery Levels: Not Supported
00:31:58.663 Endurance Groups: Not Supported
00:31:58.663 Predictable Latency Mode: Not Supported
00:31:58.663 Traffic Based Keep ALive: Not Supported
00:31:58.663 Namespace Granularity: Not Supported
00:31:58.663 SQ Associations: Not Supported
00:31:58.663 UUID List: Not Supported
00:31:58.663 Multi-Domain Subsystem: Not Supported
00:31:58.663 Fixed Capacity Management: Not Supported
00:31:58.663 Variable Capacity Management: Not Supported
00:31:58.663 Delete Endurance Group: Not Supported
00:31:58.663 Delete NVM Set: Not Supported
00:31:58.663 Extended LBA Formats Supported: Not Supported
00:31:58.663 Flexible Data Placement Supported: Not Supported
00:31:58.663
00:31:58.663 Controller Memory Buffer Support
00:31:58.663 ================================
00:31:58.663 Supported: No
00:31:58.663
00:31:58.663 Persistent Memory Region Support
00:31:58.663 ================================
00:31:58.663 Supported: No
00:31:58.663
00:31:58.663 Admin Command Set Attributes
00:31:58.663 ============================
00:31:58.663 Security Send/Receive: Not Supported
00:31:58.663 Format NVM: Not Supported
00:31:58.663 Firmware Activate/Download: Not Supported
00:31:58.663 Namespace Management: Not Supported
00:31:58.663 Device Self-Test: Not Supported
00:31:58.663 Directives: Not Supported
00:31:58.663 NVMe-MI: Not Supported
00:31:58.663 Virtualization Management: Not Supported
00:31:58.663 Doorbell Buffer Config: Not Supported
00:31:58.663 Get LBA Status Capability: Not Supported
00:31:58.663 Command & Feature Lockdown Capability: Not Supported
00:31:58.663 Abort Command Limit: 1
00:31:58.663 Async Event Request Limit: 4
00:31:58.663 Number of Firmware Slots: N/A
00:31:58.663 Firmware Slot 1 Read-Only: N/A
00:31:58.663 Firmware Activation Without Reset: N/A
00:31:58.663 Multiple Update Detection Support: N/A
00:31:58.663 Firmware Update Granularity: No Information Provided
00:31:58.663 Per-Namespace SMART Log: No
00:31:58.663 Asymmetric Namespace Access Log Page: Not Supported
00:31:58.663 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:31:58.663 Command Effects Log Page: Not Supported
00:31:58.663 Get Log Page Extended Data: Supported
00:31:58.663 Telemetry Log Pages: Not Supported
00:31:58.663 Persistent Event Log Pages: Not Supported
00:31:58.663 Supported Log Pages Log Page: May Support
00:31:58.663 Commands Supported & Effects Log Page: Not Supported
00:31:58.663 Feature Identifiers & Effects Log Page:May Support
00:31:58.663 NVMe-MI Commands & Effects Log Page: May Support
00:31:58.663 Data Area 4 for Telemetry Log: Not Supported
00:31:58.663 Error Log Page Entries Supported: 128
00:31:58.663 Keep Alive: Not Supported
00:31:58.663
00:31:58.663 NVM Command Set Attributes
00:31:58.663 ==========================
00:31:58.663 Submission Queue Entry Size
00:31:58.663 Max: 1
00:31:58.663 Min: 1
00:31:58.663 Completion Queue Entry Size
00:31:58.663 Max: 1
00:31:58.663 Min: 1
00:31:58.663 Number of Namespaces: 0
00:31:58.663 Compare Command: Not Supported
00:31:58.663 Write Uncorrectable Command: Not Supported
00:31:58.663 Dataset Management Command: Not Supported
00:31:58.663 Write Zeroes Command: Not Supported
00:31:58.663 Set Features Save Field: Not Supported
00:31:58.663 Reservations: Not Supported
00:31:58.663 Timestamp: Not Supported
00:31:58.663 Copy: Not Supported
00:31:58.663 Volatile Write Cache: Not Present
00:31:58.663 Atomic Write Unit (Normal): 1
00:31:58.663 Atomic Write Unit (PFail): 1
00:31:58.663 Atomic Compare & Write Unit: 1
00:31:58.663 Fused Compare & Write: Supported
00:31:58.663 Scatter-Gather List
00:31:58.663 SGL Command Set: Supported
00:31:58.663 SGL Keyed: Supported
00:31:58.663 SGL Bit Bucket Descriptor: Not Supported
00:31:58.663 SGL Metadata Pointer: Not Supported
00:31:58.663 Oversized SGL: Not Supported
00:31:58.663 SGL Metadata Address: Not Supported
00:31:58.663 SGL Offset: Supported
00:31:58.663 Transport SGL Data Block: Not Supported
00:31:58.663 Replay Protected Memory Block: Not Supported
00:31:58.663
00:31:58.663 Firmware Slot Information
00:31:58.663 =========================
00:31:58.663 Active slot: 0
00:31:58.663
00:31:58.663
00:31:58.663 Error Log
00:31:58.663 =========
00:31:58.663
00:31:58.663 Active Namespaces
00:31:58.663 =================
00:31:58.663 Discovery Log Page
00:31:58.663 ==================
00:31:58.663 Generation Counter: 2
00:31:58.663 Number of Records: 2
00:31:58.663 Record Format: 0
00:31:58.663
00:31:58.663 Discovery Log Entry 0
00:31:58.663 ----------------------
00:31:58.663 Transport Type: 3 (TCP)
00:31:58.663 Address Family: 1 (IPv4)
00:31:58.663 Subsystem Type: 3 (Current Discovery Subsystem)
00:31:58.663 Entry Flags:
00:31:58.663 Duplicate Returned Information: 1
00:31:58.663 Explicit Persistent Connection Support for Discovery: 1
00:31:58.663 Transport Requirements:
00:31:58.663 Secure Channel: Not Required
00:31:58.663 Port ID: 0 (0x0000)
00:31:58.663 Controller ID: 65535 (0xffff)
00:31:58.663 Admin Max SQ Size: 128
00:31:58.663 Transport Service Identifier: 4420
00:31:58.663 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:31:58.663 Transport Address: 10.0.0.2
00:31:58.663 Discovery Log Entry 1
00:31:58.663 ----------------------
00:31:58.663 Transport Type: 3 (TCP)
00:31:58.663 Address Family: 1 (IPv4)
00:31:58.663 Subsystem Type: 2 (NVM Subsystem)
00:31:58.663 Entry Flags:
00:31:58.663 Duplicate Returned Information: 0
00:31:58.663 Explicit Persistent Connection Support for Discovery: 0
00:31:58.663 Transport Requirements:
00:31:58.663 Secure Channel: Not Required
00:31:58.663 Port ID: 0 (0x0000)
00:31:58.663 Controller ID: 65535 (0xffff)
00:31:58.663 Admin Max SQ Size: 128
00:31:58.663 Transport Service Identifier: 4420
00:31:58.663 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:31:58.663 Transport Address: 10.0.0.2
[2024-11-07 13:37:06.512230] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:31:58.663 [2024-11-07 13:37:06.512247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600
00:31:58.663 [2024-11-07 13:37:06.512261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:58.663 [2024-11-07 13:37:06.512270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000025600
00:31:58.663 [2024-11-07 13:37:06.512279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:58.663 [2024-11-07 13:37:06.512287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000025600
00:31:58.663 [2024-11-07 13:37:06.512295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:58.663 [2024-11-07 13:37:06.512303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600
00:31:58.663 [2024-11-07 13:37:06.512311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:58.663 [2024-11-07 13:37:06.512329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:58.663 [2024-11-07 13:37:06.512336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:58.663 [2024-11-07 13:37:06.512346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600)
00:31:58.663 [2024-11-07 13:37:06.512360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.663 [2024-11-07 13:37:06.512381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:31:58.663 [2024-11-07 13:37:06.512591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:58.663 [2024-11-07 13:37:06.512602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:58.663 [2024-11-07 13:37:06.512608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:58.663 [2024-11-07 13:37:06.512615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600
00:31:58.664 [2024-11-07 13:37:06.512628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:58.664 [2024-11-07 13:37:06.512634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*:
enter 00:31:58.664 [2024-11-07 13:37:06.512644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.664 [2024-11-07 13:37:06.512658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.664 [2024-11-07 13:37:06.512677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.664 [2024-11-07 13:37:06.516874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.664 [2024-11-07 13:37:06.516901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.664 [2024-11-07 13:37:06.516907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.664 [2024-11-07 13:37:06.516914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.664 [2024-11-07 13:37:06.516927] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:31:58.664 [2024-11-07 13:37:06.516936] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:31:58.664 [2024-11-07 13:37:06.516953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.664 [2024-11-07 13:37:06.516961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.664 [2024-11-07 13:37:06.516972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.664 [2024-11-07 13:37:06.516986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.664 [2024-11-07 13:37:06.517008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.664 [2024-11-07 13:37:06.517233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.664 [2024-11-07 13:37:06.517242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.664 [2024-11-07 13:37:06.517248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.664 [2024-11-07 13:37:06.517254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.664 [2024-11-07 13:37:06.517267] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 0 milliseconds 00:31:58.664 00:31:58.664 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:58.664 [2024-11-07 13:37:06.611202] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
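For orientation: the spdk_nvme_identify invocation above (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all) drives exactly the connect-and-identify state machine traced in the surrounding entries. A minimal stand-alone sketch of that flow against SPDK's public host API follows; it assumes spdk/env.h and spdk/nvme.h from the SPDK tree built in this job, and the program name and the abbreviated error handling are illustrative, not taken from the test.

    /* Connect to the TCP target exercised above and read back the cached
     * IDENTIFY CONTROLLER data. spdk_nvme_connect() runs the same init
     * state machine the log traces: connect adminq -> read vs/cap ->
     * enable -> wait for CSTS.RDY = 1 -> identify -> ready. */
    #include <stdio.h>

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch"; /* illustrative name */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport ID string the test passes via -r. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Synchronous connect: returns once the controller is ready. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        /* MDTS is an exponent: max transfer = 2^MDTS minimum-size pages. */
        printf("CNTLID 0x%04x, MDTS %u\n", cdata->cntlid, cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

Each "setting state to ..." line in the trace, up through "CC.EN = 1 && CSTS.RDY = 1 - controller is ready", is this state machine advancing inside spdk_nvme_connect().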
00:31:58.664 [2024-11-07 13:37:06.611292] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027966 ] 00:31:58.928 [2024-11-07 13:37:06.685021] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:31:58.928 [2024-11-07 13:37:06.685123] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:58.928 [2024-11-07 13:37:06.685134] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:58.928 [2024-11-07 13:37:06.685156] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:58.928 [2024-11-07 13:37:06.685173] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:58.928 [2024-11-07 13:37:06.685881] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:31:58.929 [2024-11-07 13:37:06.685921] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000025600 0 00:31:58.929 [2024-11-07 13:37:06.691880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:58.929 [2024-11-07 13:37:06.691904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:58.929 [2024-11-07 13:37:06.691912] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:58.929 [2024-11-07 13:37:06.691918] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:58.929 [2024-11-07 13:37:06.691966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.691979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.691987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.929 [2024-11-07 13:37:06.692008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:58.929 [2024-11-07 13:37:06.692033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.929 [2024-11-07 13:37:06.699881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.929 [2024-11-07 13:37:06.699900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.929 [2024-11-07 13:37:06.699907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.699919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.929 [2024-11-07 13:37:06.699938] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:58.929 [2024-11-07 13:37:06.699957] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:31:58.929 [2024-11-07 13:37:06.699967] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:31:58.929 [2024-11-07 13:37:06.699982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.699991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.929 [2024-11-07 
13:37:06.699998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.929 [2024-11-07 13:37:06.700013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.929 [2024-11-07 13:37:06.700034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.929 [2024-11-07 13:37:06.700244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.929 [2024-11-07 13:37:06.700255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.929 [2024-11-07 13:37:06.700261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.700269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.929 [2024-11-07 13:37:06.700280] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:31:58.929 [2024-11-07 13:37:06.700296] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:31:58.929 [2024-11-07 13:37:06.700308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.700315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.700321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.929 [2024-11-07 13:37:06.700335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.929 [2024-11-07 13:37:06.700352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.929 [2024-11-07 13:37:06.700505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.929 [2024-11-07 13:37:06.700515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.929 [2024-11-07 13:37:06.700520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.700526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.929 [2024-11-07 13:37:06.700535] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:31:58.929 [2024-11-07 13:37:06.700548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:58.929 [2024-11-07 13:37:06.700558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.700565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.700574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.929 [2024-11-07 13:37:06.700587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.929 [2024-11-07 13:37:06.700602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.929 [2024-11-07 13:37:06.700795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.929 [2024-11-07 13:37:06.700805] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.929 [2024-11-07 13:37:06.700810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.700820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.929 [2024-11-07 13:37:06.700831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:58.929 [2024-11-07 13:37:06.700845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.700852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.700859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.929 [2024-11-07 13:37:06.700881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.929 [2024-11-07 13:37:06.700896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.929 [2024-11-07 13:37:06.701056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.929 [2024-11-07 13:37:06.701065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.929 [2024-11-07 13:37:06.701072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.701079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.929 [2024-11-07 13:37:06.701087] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:58.929 [2024-11-07 13:37:06.701095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:58.929 [2024-11-07 13:37:06.701109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:58.929 [2024-11-07 13:37:06.701218] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:31:58.929 [2024-11-07 13:37:06.701225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:58.929 [2024-11-07 13:37:06.701244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.701251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.701257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.929 [2024-11-07 13:37:06.701269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.929 [2024-11-07 13:37:06.701284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.929 [2024-11-07 13:37:06.701472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.929 [2024-11-07 13:37:06.701482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.929 [2024-11-07 13:37:06.701487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.929 
[2024-11-07 13:37:06.701493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.929 [2024-11-07 13:37:06.701501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:58.929 [2024-11-07 13:37:06.701515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.701524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.701532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.929 [2024-11-07 13:37:06.701544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.929 [2024-11-07 13:37:06.701559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.929 [2024-11-07 13:37:06.701755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.929 [2024-11-07 13:37:06.701767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.929 [2024-11-07 13:37:06.701774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.701781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.929 [2024-11-07 13:37:06.701789] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:58.929 [2024-11-07 13:37:06.701797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:58.929 [2024-11-07 13:37:06.701809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:31:58.929 [2024-11-07 13:37:06.701823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:58.929 [2024-11-07 13:37:06.701840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.701847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.929 [2024-11-07 13:37:06.701860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.929 [2024-11-07 13:37:06.701881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.929 [2024-11-07 13:37:06.702122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:58.929 [2024-11-07 13:37:06.702133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:58.929 [2024-11-07 13:37:06.702138] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.702146] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=0 00:31:58.929 [2024-11-07 13:37:06.702154] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:58.929 [2024-11-07 13:37:06.702163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:31:58.929 [2024-11-07 13:37:06.702186] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.702194] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:58.929 [2024-11-07 13:37:06.743053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.930 [2024-11-07 13:37:06.743073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.930 [2024-11-07 13:37:06.743079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.930 [2024-11-07 13:37:06.743103] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:31:58.930 [2024-11-07 13:37:06.743112] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:31:58.930 [2024-11-07 13:37:06.743120] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:31:58.930 [2024-11-07 13:37:06.743131] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:31:58.930 [2024-11-07 13:37:06.743141] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:31:58.930 [2024-11-07 13:37:06.743149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:31:58.930 [2024-11-07 13:37:06.743164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:58.930 [2024-11-07 13:37:06.743176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.930 [2024-11-07 13:37:06.743208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:58.930 [2024-11-07 13:37:06.743227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.930 [2024-11-07 13:37:06.743314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.930 [2024-11-07 13:37:06.743324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.930 [2024-11-07 13:37:06.743329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.930 [2024-11-07 13:37:06.743346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:31:58.930 [2024-11-07 13:37:06.743374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.930 [2024-11-07 13:37:06.743384] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000025600) 00:31:58.930 [2024-11-07 13:37:06.743405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.930 [2024-11-07 13:37:06.743413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000025600) 00:31:58.930 [2024-11-07 13:37:06.743433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.930 [2024-11-07 13:37:06.743442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743447] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.930 [2024-11-07 13:37:06.743462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.930 [2024-11-07 13:37:06.743470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:58.930 [2024-11-07 13:37:06.743485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:58.930 [2024-11-07 13:37:06.743494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:58.930 [2024-11-07 13:37:06.743513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.930 [2024-11-07 13:37:06.743530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:58.930 [2024-11-07 13:37:06.743538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:58.930 [2024-11-07 13:37:06.743545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:58.930 [2024-11-07 13:37:06.743552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.930 [2024-11-07 13:37:06.743559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:58.930 [2024-11-07 13:37:06.743757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.930 [2024-11-07 13:37:06.743766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.930 [2024-11-07 13:37:06.743772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:58.930 [2024-11-07 13:37:06.743786] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:31:58.930 [2024-11-07 13:37:06.743795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:58.930 [2024-11-07 13:37:06.743812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:31:58.930 [2024-11-07 13:37:06.743821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:58.930 [2024-11-07 13:37:06.743831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.743844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:58.930 [2024-11-07 13:37:06.743856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:58.930 [2024-11-07 13:37:06.747885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:58.930 [2024-11-07 13:37:06.748072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.930 [2024-11-07 13:37:06.748082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.930 [2024-11-07 13:37:06.748088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.748094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:58.930 [2024-11-07 13:37:06.748176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:31:58.930 [2024-11-07 13:37:06.748196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:58.930 [2024-11-07 13:37:06.748210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.748218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:58.930 [2024-11-07 13:37:06.748233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.930 [2024-11-07 13:37:06.748249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:58.930 [2024-11-07 13:37:06.748437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:58.930 [2024-11-07 13:37:06.748446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:58.930 [2024-11-07 13:37:06.748452] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.748459] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:31:58.930 [2024-11-07 13:37:06.748466] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:58.930 [2024-11-07 13:37:06.748473] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.748486] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.748492] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.748625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.930 [2024-11-07 13:37:06.748634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.930 [2024-11-07 13:37:06.748643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.748649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:58.930 [2024-11-07 13:37:06.748675] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:31:58.930 [2024-11-07 13:37:06.748690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:31:58.930 [2024-11-07 13:37:06.748704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:31:58.930 [2024-11-07 13:37:06.748717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.748724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:58.930 [2024-11-07 13:37:06.748736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.930 [2024-11-07 13:37:06.748752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:58.930 [2024-11-07 13:37:06.748944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:58.930 [2024-11-07 13:37:06.748953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:58.930 [2024-11-07 13:37:06.748959] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.748965] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:31:58.930 [2024-11-07 13:37:06.748972] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:58.930 [2024-11-07 13:37:06.748984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.749005] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.749012] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:58.930 [2024-11-07 13:37:06.749159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.930 [2024-11-07 13:37:06.749168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.930 [2024-11-07 13:37:06.749174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.749180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:58.931 [2024-11-07 13:37:06.749200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:58.931 [2024-11-07 13:37:06.749214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:58.931 [2024-11-07 13:37:06.749230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.749236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:58.931 [2024-11-07 13:37:06.749248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.931 [2024-11-07 13:37:06.749264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:58.931 [2024-11-07 13:37:06.749464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:58.931 [2024-11-07 13:37:06.749474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:58.931 [2024-11-07 13:37:06.749479] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.749486] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:31:58.931 [2024-11-07 13:37:06.749493] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:58.931 [2024-11-07 13:37:06.749499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.749511] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.749517] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.749663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.931 [2024-11-07 13:37:06.749672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.931 [2024-11-07 13:37:06.749677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.749684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:58.931 [2024-11-07 13:37:06.749699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:58.931 [2024-11-07 13:37:06.749711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:31:58.931 [2024-11-07 13:37:06.749722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:31:58.931 [2024-11-07 13:37:06.749732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:58.931 [2024-11-07 13:37:06.749740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:58.931 [2024-11-07 13:37:06.749748] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:31:58.931 [2024-11-07 13:37:06.749757] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:31:58.931 [2024-11-07 13:37:06.749764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:31:58.931 [2024-11-07 13:37:06.749772] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:31:58.931 [2024-11-07 13:37:06.749805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.749813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:58.931 [2024-11-07 13:37:06.749825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.931 [2024-11-07 13:37:06.749835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.749841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.749848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:58.931 [2024-11-07 13:37:06.749858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.931 [2024-11-07 13:37:06.749883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:58.931 [2024-11-07 13:37:06.749891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:58.931 [2024-11-07 13:37:06.750100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.931 [2024-11-07 13:37:06.750110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.931 [2024-11-07 13:37:06.750116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.750124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:58.931 [2024-11-07 13:37:06.750135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.931 [2024-11-07 13:37:06.750144] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.931 [2024-11-07 13:37:06.750149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.750155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:58.931 [2024-11-07 13:37:06.750170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.750177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:58.931 [2024-11-07 13:37:06.750189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.931 [2024-11-07 13:37:06.750203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:58.931 [2024-11-07 13:37:06.750408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.931 [2024-11-07 13:37:06.750417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.931 [2024-11-07 13:37:06.750422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.750428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:58.931 [2024-11-07 13:37:06.750441] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.750447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:58.931 [2024-11-07 13:37:06.750457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.931 [2024-11-07 13:37:06.750470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:58.931 [2024-11-07 13:37:06.750624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.931 [2024-11-07 13:37:06.750633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.931 [2024-11-07 13:37:06.750639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.750645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:58.931 [2024-11-07 13:37:06.750657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.750663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:58.931 [2024-11-07 13:37:06.750673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.931 [2024-11-07 13:37:06.750686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:58.931 [2024-11-07 13:37:06.750856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.931 [2024-11-07 13:37:06.750869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.931 [2024-11-07 13:37:06.750875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.750880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:58.931 [2024-11-07 13:37:06.750904] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.750912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:31:58.931 [2024-11-07 13:37:06.750923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.931 [2024-11-07 13:37:06.750935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.750942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:31:58.931 [2024-11-07 13:37:06.750952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.931 [2024-11-07 13:37:06.750967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.750974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000025600) 00:31:58.931 [2024-11-07 13:37:06.750984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.931 [2024-11-07 13:37:06.751001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:31:58.931 [2024-11-07 13:37:06.751009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000025600) 00:31:58.931 [2024-11-07 13:37:06.751020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.931 [2024-11-07 13:37:06.751036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:58.931 [2024-11-07 13:37:06.751045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:58.931 [2024-11-07 13:37:06.751052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:31:58.931 [2024-11-07 13:37:06.751059] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:58.931 [2024-11-07 13:37:06.751304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:58.931 [2024-11-07 13:37:06.751314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:58.931 [2024-11-07 13:37:06.751320] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.751326] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=8192, cccid=5 00:31:58.931 [2024-11-07 13:37:06.751334] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000025600): expected_datao=0, payload_size=8192 00:31:58.931 [2024-11-07 13:37:06.751341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.751401] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.751409] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.751420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:58.931 [2024-11-07 13:37:06.751433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:58.931 [2024-11-07 13:37:06.751439] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:58.931 [2024-11-07 13:37:06.751445] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=512, cccid=4 00:31:58.932 [2024-11-07 13:37:06.751452] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=512 00:31:58.932 [2024-11-07 13:37:06.751458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751467] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751472] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:58.932 [2024-11-07 13:37:06.751488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:58.932 [2024-11-07 13:37:06.751494] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751499] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=512, cccid=6 00:31:58.932 [2024-11-07 13:37:06.751506] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000025600): expected_datao=0, payload_size=512 00:31:58.932 
[2024-11-07 13:37:06.751512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751523] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751529] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:58.932 [2024-11-07 13:37:06.751544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:58.932 [2024-11-07 13:37:06.751549] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751555] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=7 00:31:58.932 [2024-11-07 13:37:06.751562] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:31:58.932 [2024-11-07 13:37:06.751570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751585] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751591] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.932 [2024-11-07 13:37:06.751613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.932 [2024-11-07 13:37:06.751618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:31:58.932 [2024-11-07 13:37:06.751647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.932 [2024-11-07 13:37:06.751660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.932 [2024-11-07 13:37:06.751665] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:31:58.932 [2024-11-07 13:37:06.751686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.932 [2024-11-07 13:37:06.751694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.932 [2024-11-07 13:37:06.751699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000025600 00:31:58.932 [2024-11-07 13:37:06.751716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.932 [2024-11-07 13:37:06.751724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.932 [2024-11-07 13:37:06.751729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.932 [2024-11-07 13:37:06.751735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000025600 00:31:58.932 ===================================================== 00:31:58.932 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:58.932 ===================================================== 00:31:58.932 Controller Capabilities/Features 00:31:58.932 ================================ 00:31:58.932 Vendor ID: 8086 00:31:58.932 Subsystem Vendor ID: 8086 
00:31:58.932 Serial Number: SPDK00000000000001 00:31:58.932 Model Number: SPDK bdev Controller 00:31:58.932 Firmware Version: 25.01 00:31:58.932 Recommended Arb Burst: 6 00:31:58.932 IEEE OUI Identifier: e4 d2 5c 00:31:58.932 Multi-path I/O 00:31:58.932 May have multiple subsystem ports: Yes 00:31:58.932 May have multiple controllers: Yes 00:31:58.932 Associated with SR-IOV VF: No 00:31:58.932 Max Data Transfer Size: 131072 00:31:58.932 Max Number of Namespaces: 32 00:31:58.932 Max Number of I/O Queues: 127 00:31:58.932 NVMe Specification Version (VS): 1.3 00:31:58.932 NVMe Specification Version (Identify): 1.3 00:31:58.932 Maximum Queue Entries: 128 00:31:58.932 Contiguous Queues Required: Yes 00:31:58.932 Arbitration Mechanisms Supported 00:31:58.932 Weighted Round Robin: Not Supported 00:31:58.932 Vendor Specific: Not Supported 00:31:58.932 Reset Timeout: 15000 ms 00:31:58.932 Doorbell Stride: 4 bytes 00:31:58.932 NVM Subsystem Reset: Not Supported 00:31:58.932 Command Sets Supported 00:31:58.932 NVM Command Set: Supported 00:31:58.932 Boot Partition: Not Supported 00:31:58.932 Memory Page Size Minimum: 4096 bytes 00:31:58.932 Memory Page Size Maximum: 4096 bytes 00:31:58.932 Persistent Memory Region: Not Supported 00:31:58.932 Optional Asynchronous Events Supported 00:31:58.932 Namespace Attribute Notices: Supported 00:31:58.932 Firmware Activation Notices: Not Supported 00:31:58.932 ANA Change Notices: Not Supported 00:31:58.932 PLE Aggregate Log Change Notices: Not Supported 00:31:58.932 LBA Status Info Alert Notices: Not Supported 00:31:58.932 EGE Aggregate Log Change Notices: Not Supported 00:31:58.932 Normal NVM Subsystem Shutdown event: Not Supported 00:31:58.932 Zone Descriptor Change Notices: Not Supported 00:31:58.932 Discovery Log Change Notices: Not Supported 00:31:58.932 Controller Attributes 00:31:58.932 128-bit Host Identifier: Supported 00:31:58.932 Non-Operational Permissive Mode: Not Supported 00:31:58.932 NVM Sets: Not Supported 00:31:58.932 Read Recovery Levels: Not Supported 00:31:58.932 Endurance Groups: Not Supported 00:31:58.932 Predictable Latency Mode: Not Supported 00:31:58.932 Traffic Based Keep ALive: Not Supported 00:31:58.932 Namespace Granularity: Not Supported 00:31:58.932 SQ Associations: Not Supported 00:31:58.932 UUID List: Not Supported 00:31:58.932 Multi-Domain Subsystem: Not Supported 00:31:58.932 Fixed Capacity Management: Not Supported 00:31:58.932 Variable Capacity Management: Not Supported 00:31:58.932 Delete Endurance Group: Not Supported 00:31:58.932 Delete NVM Set: Not Supported 00:31:58.932 Extended LBA Formats Supported: Not Supported 00:31:58.932 Flexible Data Placement Supported: Not Supported 00:31:58.932 00:31:58.932 Controller Memory Buffer Support 00:31:58.932 ================================ 00:31:58.932 Supported: No 00:31:58.932 00:31:58.932 Persistent Memory Region Support 00:31:58.932 ================================ 00:31:58.932 Supported: No 00:31:58.932 00:31:58.932 Admin Command Set Attributes 00:31:58.932 ============================ 00:31:58.932 Security Send/Receive: Not Supported 00:31:58.932 Format NVM: Not Supported 00:31:58.932 Firmware Activate/Download: Not Supported 00:31:58.932 Namespace Management: Not Supported 00:31:58.932 Device Self-Test: Not Supported 00:31:58.932 Directives: Not Supported 00:31:58.932 NVMe-MI: Not Supported 00:31:58.932 Virtualization Management: Not Supported 00:31:58.932 Doorbell Buffer Config: Not Supported 00:31:58.932 Get LBA Status Capability: Not Supported 00:31:58.932 Command & 
Feature Lockdown Capability: Not Supported 00:31:58.932 Abort Command Limit: 4 00:31:58.932 Async Event Request Limit: 4 00:31:58.932 Number of Firmware Slots: N/A 00:31:58.932 Firmware Slot 1 Read-Only: N/A 00:31:58.932 Firmware Activation Without Reset: N/A 00:31:58.932 Multiple Update Detection Support: N/A 00:31:58.932 Firmware Update Granularity: No Information Provided 00:31:58.932 Per-Namespace SMART Log: No 00:31:58.932 Asymmetric Namespace Access Log Page: Not Supported 00:31:58.932 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:58.932 Command Effects Log Page: Supported 00:31:58.932 Get Log Page Extended Data: Supported 00:31:58.932 Telemetry Log Pages: Not Supported 00:31:58.932 Persistent Event Log Pages: Not Supported 00:31:58.932 Supported Log Pages Log Page: May Support 00:31:58.932 Commands Supported & Effects Log Page: Not Supported 00:31:58.932 Feature Identifiers & Effects Log Page:May Support 00:31:58.932 NVMe-MI Commands & Effects Log Page: May Support 00:31:58.932 Data Area 4 for Telemetry Log: Not Supported 00:31:58.932 Error Log Page Entries Supported: 128 00:31:58.932 Keep Alive: Supported 00:31:58.932 Keep Alive Granularity: 10000 ms 00:31:58.932 00:31:58.932 NVM Command Set Attributes 00:31:58.932 ========================== 00:31:58.932 Submission Queue Entry Size 00:31:58.932 Max: 64 00:31:58.932 Min: 64 00:31:58.932 Completion Queue Entry Size 00:31:58.932 Max: 16 00:31:58.932 Min: 16 00:31:58.932 Number of Namespaces: 32 00:31:58.932 Compare Command: Supported 00:31:58.932 Write Uncorrectable Command: Not Supported 00:31:58.932 Dataset Management Command: Supported 00:31:58.932 Write Zeroes Command: Supported 00:31:58.932 Set Features Save Field: Not Supported 00:31:58.932 Reservations: Supported 00:31:58.932 Timestamp: Not Supported 00:31:58.933 Copy: Supported 00:31:58.933 Volatile Write Cache: Present 00:31:58.933 Atomic Write Unit (Normal): 1 00:31:58.933 Atomic Write Unit (PFail): 1 00:31:58.933 Atomic Compare & Write Unit: 1 00:31:58.933 Fused Compare & Write: Supported 00:31:58.933 Scatter-Gather List 00:31:58.933 SGL Command Set: Supported 00:31:58.933 SGL Keyed: Supported 00:31:58.933 SGL Bit Bucket Descriptor: Not Supported 00:31:58.933 SGL Metadata Pointer: Not Supported 00:31:58.933 Oversized SGL: Not Supported 00:31:58.933 SGL Metadata Address: Not Supported 00:31:58.933 SGL Offset: Supported 00:31:58.933 Transport SGL Data Block: Not Supported 00:31:58.933 Replay Protected Memory Block: Not Supported 00:31:58.933 00:31:58.933 Firmware Slot Information 00:31:58.933 ========================= 00:31:58.933 Active slot: 1 00:31:58.933 Slot 1 Firmware Revision: 25.01 00:31:58.933 00:31:58.933 00:31:58.933 Commands Supported and Effects 00:31:58.933 ============================== 00:31:58.933 Admin Commands 00:31:58.933 -------------- 00:31:58.933 Get Log Page (02h): Supported 00:31:58.933 Identify (06h): Supported 00:31:58.933 Abort (08h): Supported 00:31:58.933 Set Features (09h): Supported 00:31:58.933 Get Features (0Ah): Supported 00:31:58.933 Asynchronous Event Request (0Ch): Supported 00:31:58.933 Keep Alive (18h): Supported 00:31:58.933 I/O Commands 00:31:58.933 ------------ 00:31:58.933 Flush (00h): Supported LBA-Change 00:31:58.933 Write (01h): Supported LBA-Change 00:31:58.933 Read (02h): Supported 00:31:58.933 Compare (05h): Supported 00:31:58.933 Write Zeroes (08h): Supported LBA-Change 00:31:58.933 Dataset Management (09h): Supported LBA-Change 00:31:58.933 Copy (19h): Supported LBA-Change 00:31:58.933 00:31:58.933 Error Log 00:31:58.933 
========= 00:31:58.933 00:31:58.933 Arbitration 00:31:58.933 =========== 00:31:58.933 Arbitration Burst: 1 00:31:58.933 00:31:58.933 Power Management 00:31:58.933 ================ 00:31:58.933 Number of Power States: 1 00:31:58.933 Current Power State: Power State #0 00:31:58.933 Power State #0: 00:31:58.933 Max Power: 0.00 W 00:31:58.933 Non-Operational State: Operational 00:31:58.933 Entry Latency: Not Reported 00:31:58.933 Exit Latency: Not Reported 00:31:58.933 Relative Read Throughput: 0 00:31:58.933 Relative Read Latency: 0 00:31:58.933 Relative Write Throughput: 0 00:31:58.933 Relative Write Latency: 0 00:31:58.933 Idle Power: Not Reported 00:31:58.933 Active Power: Not Reported 00:31:58.933 Non-Operational Permissive Mode: Not Supported 00:31:58.933 00:31:58.933 Health Information 00:31:58.933 ================== 00:31:58.933 Critical Warnings: 00:31:58.933 Available Spare Space: OK 00:31:58.933 Temperature: OK 00:31:58.933 Device Reliability: OK 00:31:58.933 Read Only: No 00:31:58.933 Volatile Memory Backup: OK 00:31:58.933 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:58.933 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:58.933 Available Spare: 0% 00:31:58.933 Available Spare Threshold: 0% 00:31:58.933 Life Percentage Used:[2024-11-07 13:37:06.755905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.933 [2024-11-07 13:37:06.755919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000025600) 00:31:58.933 [2024-11-07 13:37:06.755932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.933 [2024-11-07 13:37:06.755953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:58.933 [2024-11-07 13:37:06.756173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.933 [2024-11-07 13:37:06.756183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.933 [2024-11-07 13:37:06.756190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.933 [2024-11-07 13:37:06.756197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000025600 00:31:58.933 [2024-11-07 13:37:06.756249] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:31:58.933 [2024-11-07 13:37:06.756266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:31:58.933 [2024-11-07 13:37:06.756277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.933 [2024-11-07 13:37:06.756286] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000025600 00:31:58.933 [2024-11-07 13:37:06.756294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.933 [2024-11-07 13:37:06.756302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000025600 00:31:58.933 [2024-11-07 13:37:06.756309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.933 [2024-11-07 13:37:06.756317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 
00:31:58.933 [2024-11-07 13:37:06.756327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.933 [2024-11-07 13:37:06.756340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.933 [2024-11-07 13:37:06.756347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.933 [2024-11-07 13:37:06.756354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.933 [2024-11-07 13:37:06.756366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.933 [2024-11-07 13:37:06.756385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.933 [2024-11-07 13:37:06.756556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.933 [2024-11-07 13:37:06.756566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.933 [2024-11-07 13:37:06.756572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.933 [2024-11-07 13:37:06.756581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.933 [2024-11-07 13:37:06.756593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.933 [2024-11-07 13:37:06.756600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.933 [2024-11-07 13:37:06.756607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.933 [2024-11-07 13:37:06.756618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.933 [2024-11-07 13:37:06.756637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.933 [2024-11-07 13:37:06.756814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.933 [2024-11-07 13:37:06.756823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.933 [2024-11-07 13:37:06.756828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.933 [2024-11-07 13:37:06.756835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.933 [2024-11-07 13:37:06.756843] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:31:58.933 [2024-11-07 13:37:06.756851] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:31:58.933 [2024-11-07 13:37:06.756870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.933 [2024-11-07 13:37:06.756880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.933 [2024-11-07 13:37:06.756887] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.933 [2024-11-07 13:37:06.756898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.933 [2024-11-07 13:37:06.756913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.933 [2024-11-07 13:37:06.757111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.933 [2024-11-07 
13:37:06.757121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.933 [2024-11-07 13:37:06.757126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.933 [2024-11-07 13:37:06.757132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.934 [2024-11-07 13:37:06.757146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.757153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.757158] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.934 [2024-11-07 13:37:06.757169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.934 [2024-11-07 13:37:06.757182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.934 [2024-11-07 13:37:06.757384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.934 [2024-11-07 13:37:06.757396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.934 [2024-11-07 13:37:06.757401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.757407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.934 [2024-11-07 13:37:06.757421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.757427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.757433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.934 [2024-11-07 13:37:06.757447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.934 [2024-11-07 13:37:06.757461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.934 [2024-11-07 13:37:06.757664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.934 [2024-11-07 13:37:06.757673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.934 [2024-11-07 13:37:06.757678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.757684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.934 [2024-11-07 13:37:06.757698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.757704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.757709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.934 [2024-11-07 13:37:06.757720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.934 [2024-11-07 13:37:06.757733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.934 [2024-11-07 13:37:06.757941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.934 [2024-11-07 13:37:06.757951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.934 [2024-11-07 13:37:06.757956] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.757962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.934 [2024-11-07 13:37:06.757976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.757982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.757988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.934 [2024-11-07 13:37:06.757998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.934 [2024-11-07 13:37:06.758012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.934 [2024-11-07 13:37:06.758200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.934 [2024-11-07 13:37:06.758208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.934 [2024-11-07 13:37:06.758214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.934 [2024-11-07 13:37:06.758233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.934 [2024-11-07 13:37:06.758256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.934 [2024-11-07 13:37:06.758269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.934 [2024-11-07 13:37:06.758463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.934 [2024-11-07 13:37:06.758472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.934 [2024-11-07 13:37:06.758477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.934 [2024-11-07 13:37:06.758497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.934 [2024-11-07 13:37:06.758519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.934 [2024-11-07 13:37:06.758533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.934 [2024-11-07 13:37:06.758587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.934 [2024-11-07 13:37:06.758596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.934 [2024-11-07 13:37:06.758601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758607] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.934 [2024-11-07 13:37:06.758621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.934 [2024-11-07 13:37:06.758643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.934 [2024-11-07 13:37:06.758656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.934 [2024-11-07 13:37:06.758710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.934 [2024-11-07 13:37:06.758719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.934 [2024-11-07 13:37:06.758727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.934 [2024-11-07 13:37:06.758747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.934 [2024-11-07 13:37:06.758769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.934 [2024-11-07 13:37:06.758782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.934 [2024-11-07 13:37:06.758840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.934 [2024-11-07 13:37:06.758849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.934 [2024-11-07 13:37:06.758860] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.934 [2024-11-07 13:37:06.758886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.758898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.934 [2024-11-07 13:37:06.758908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.934 [2024-11-07 13:37:06.758922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.934 [2024-11-07 13:37:06.758978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.934 [2024-11-07 13:37:06.758990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.934 [2024-11-07 13:37:06.758995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.759001] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.934 [2024-11-07 13:37:06.759015] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.759021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.759027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.934 [2024-11-07 13:37:06.759039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.934 [2024-11-07 13:37:06.759052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.934 [2024-11-07 13:37:06.759217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.934 [2024-11-07 13:37:06.759226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.934 [2024-11-07 13:37:06.759231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.759237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.934 [2024-11-07 13:37:06.759251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.759257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.759263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.934 [2024-11-07 13:37:06.759273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.934 [2024-11-07 13:37:06.759286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.934 [2024-11-07 13:37:06.759482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.934 [2024-11-07 13:37:06.759491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.934 [2024-11-07 13:37:06.759496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.759502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.934 [2024-11-07 13:37:06.759515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.759521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:58.934 [2024-11-07 13:37:06.759527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.934 [2024-11-07 13:37:06.759538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.934 [2024-11-07 13:37:06.759551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.935 [2024-11-07 13:37:06.759714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.935 [2024-11-07 13:37:06.759723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.935 [2024-11-07 13:37:06.759728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.935 [2024-11-07 13:37:06.759734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.935 [2024-11-07 13:37:06.759747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:58.935 [2024-11-07 13:37:06.759753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:31:58.935 [2024-11-07 13:37:06.759759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:31:58.935 [2024-11-07 13:37:06.759769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.935 [2024-11-07 13:37:06.759783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:58.935 [2024-11-07 13:37:06.763875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:58.935 [2024-11-07 13:37:06.763895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:58.935 [2024-11-07 13:37:06.763901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:58.935 [2024-11-07 13:37:06.763908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:31:58.935 [2024-11-07 13:37:06.763922] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:31:58.935 0% 00:31:58.935 Data Units Read: 0 00:31:58.935 Data Units Written: 0 00:31:58.935 Host Read Commands: 0 00:31:58.935 Host Write Commands: 0 00:31:58.935 Controller Busy Time: 0 minutes 00:31:58.935 Power Cycles: 0 00:31:58.935 Power On Hours: 0 hours 00:31:58.935 Unsafe Shutdowns: 0 00:31:58.935 Unrecoverable Media Errors: 0 00:31:58.935 Lifetime Error Log Entries: 0 00:31:58.935 Warning Temperature Time: 0 minutes 00:31:58.935 Critical Temperature Time: 0 minutes 00:31:58.935 00:31:58.935 Number of Queues 00:31:58.935 ================ 00:31:58.935 Number of I/O Submission Queues: 127 00:31:58.935 Number of I/O Completion Queues: 127 00:31:58.935 00:31:58.935 Active Namespaces 00:31:58.935 ================= 00:31:58.935 Namespace ID:1 00:31:58.935 Error Recovery Timeout: Unlimited 00:31:58.935 Command Set Identifier: NVM (00h) 00:31:58.935 Deallocate: Supported 00:31:58.935 Deallocated/Unwritten Error: Not Supported 00:31:58.935 Deallocated Read Value: Unknown 00:31:58.935 Deallocate in Write Zeroes: Not Supported 00:31:58.935 Deallocated Guard Field: 0xFFFF 00:31:58.935 Flush: Supported 00:31:58.935 Reservation: Supported 00:31:58.935 Namespace Sharing Capabilities: Multiple Controllers 00:31:58.935 Size (in LBAs): 131072 (0GiB) 00:31:58.935 Capacity (in LBAs): 131072 (0GiB) 00:31:58.935 Utilization (in LBAs): 131072 (0GiB) 00:31:58.935 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:58.935 EUI64: ABCDEF0123456789 00:31:58.935 UUID: 20c97dcc-8a89-46e7-bab7-383c00cd5d2c 00:31:58.935 Thin Provisioning: Not Supported 00:31:58.935 Per-NS Atomic Units: Yes 00:31:58.935 Atomic Boundary Size (Normal): 0 00:31:58.935 Atomic Boundary Size (PFail): 0 00:31:58.935 Atomic Boundary Offset: 0 00:31:58.935 Maximum Single Source Range Length: 65535 00:31:58.935 Maximum Copy Length: 65535 00:31:58.935 Maximum Source Range Count: 1 00:31:58.935 NGUID/EUI64 Never Reused: No 00:31:58.935 Namespace Write Protected: No 00:31:58.935 Number of LBA Formats: 1 00:31:58.935 Current LBA Format: LBA Format #00 00:31:58.935 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:58.935 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.935 
13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:58.935 rmmod nvme_tcp 00:31:58.935 rmmod nvme_fabrics 00:31:58.935 rmmod nvme_keyring 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 4027636 ']' 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 4027636 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 4027636 ']' 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 4027636 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:58.935 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4027636 00:31:59.195 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:59.195 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:59.195 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4027636' 00:31:59.195 killing process with pid 4027636 00:31:59.195 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 4027636 00:31:59.195 13:37:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 4027636 00:32:00.137 13:37:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:00.137 13:37:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:00.137 13:37:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:00.137 13:37:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:32:00.137 13:37:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:32:00.137 13:37:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:32:00.137 13:37:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:00.137 13:37:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:00.137 13:37:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:00.137 13:37:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.137 13:37:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.137 13:37:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.047 13:37:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:02.047 00:32:02.047 real 0m13.114s 00:32:02.047 user 0m10.835s 00:32:02.047 sys 0m6.658s 00:32:02.047 13:37:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:02.047 13:37:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:02.047 ************************************ 00:32:02.047 END TEST nvmf_identify 00:32:02.047 ************************************ 00:32:02.047 13:37:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:02.047 13:37:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:02.047 13:37:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:02.047 13:37:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.047 ************************************ 00:32:02.047 START TEST nvmf_perf 00:32:02.047 ************************************ 00:32:02.047 13:37:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:02.308 * Looking for test storage... 
00:32:02.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:02.308 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:02.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.309 --rc genhtml_branch_coverage=1 00:32:02.309 --rc genhtml_function_coverage=1 00:32:02.309 --rc genhtml_legend=1 00:32:02.309 --rc geninfo_all_blocks=1 00:32:02.309 --rc geninfo_unexecuted_blocks=1 00:32:02.309 00:32:02.309 ' 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:02.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.309 --rc genhtml_branch_coverage=1 00:32:02.309 --rc genhtml_function_coverage=1 00:32:02.309 --rc genhtml_legend=1 00:32:02.309 --rc geninfo_all_blocks=1 00:32:02.309 --rc geninfo_unexecuted_blocks=1 00:32:02.309 00:32:02.309 ' 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:02.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.309 --rc genhtml_branch_coverage=1 00:32:02.309 --rc genhtml_function_coverage=1 00:32:02.309 --rc genhtml_legend=1 00:32:02.309 --rc geninfo_all_blocks=1 00:32:02.309 --rc geninfo_unexecuted_blocks=1 00:32:02.309 00:32:02.309 ' 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:02.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.309 --rc genhtml_branch_coverage=1 00:32:02.309 --rc genhtml_function_coverage=1 00:32:02.309 --rc genhtml_legend=1 00:32:02.309 --rc geninfo_all_blocks=1 00:32:02.309 --rc geninfo_unexecuted_blocks=1 00:32:02.309 00:32:02.309 ' 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:02.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.309 13:37:10 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:02.309 13:37:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:10.443 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:10.443 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:10.443 Found net devices under 0000:31:00.0: cvl_0_0 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.443 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:10.443 13:37:18 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:10.444 Found net devices under 0000:31:00.1: cvl_0_1 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:10.444 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:10.704 13:37:18 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:10.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:10.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:32:10.704 00:32:10.704 --- 10.0.0.2 ping statistics --- 00:32:10.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.704 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:10.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:10.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:32:10.704 00:32:10.704 --- 10.0.0.1 ping statistics --- 00:32:10.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.704 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=4032938 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 4032938 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 4032938 ']' 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:32:10.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:10.704 13:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:10.704 [2024-11-07 13:37:18.650577] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:32:10.704 [2024-11-07 13:37:18.650707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.964 [2024-11-07 13:37:18.817124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:10.964 [2024-11-07 13:37:18.919691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.964 [2024-11-07 13:37:18.919733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.964 [2024-11-07 13:37:18.919745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:10.964 [2024-11-07 13:37:18.919756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:10.964 [2024-11-07 13:37:18.919765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:10.964 [2024-11-07 13:37:18.921946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.964 [2024-11-07 13:37:18.924887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:10.964 [2024-11-07 13:37:18.924976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.964 [2024-11-07 13:37:18.924995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:11.643 13:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:11.643 13:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:32:11.643 13:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:11.643 13:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:11.643 13:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:11.643 13:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:11.643 13:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:11.643 13:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:32:12.010 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:32:12.010 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:32:12.270 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:32:12.270 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:12.530 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
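The entries above and immediately below assemble an isolated point-to-point rig out of the two e810 ports and then build the NVMe-oF target over RPC. The following is a condensed sketch reconstructed from the ip/iptables and rpc.py calls visible in this trace; the long /var/jenkins/... prefixes are shortened for readability, everything else is taken verbatim from the log:

    # Network rig: one port stays in the default netns as the initiator,
    # the other moves into cvl_0_0_ns_spdk as the target side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Target setup driven by host/perf.sh ($rpc abbreviating scripts/rpc.py,
    # run against the nvmf_tgt started inside the namespace above):
    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 64 512                  # -> Malloc0 (64 MiB, 512 B blocks)
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local drive at 0000:65:00.0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up, spdk_nvme_perf is pointed first at the local PCIe controller (trtype:PCIe traddr:0000:65:00.0) and then at the TCP target (10.0.0.2:4420), which is where the latency tables below come from.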
00:32:12.530 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:32:12.530 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:32:12.530 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:32:12.530 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:12.790 [2024-11-07 13:37:20.580849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:12.790 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:13.050 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:13.050 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:13.050 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:13.050 13:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:13.310 13:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.570 [2024-11-07 13:37:21.319659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.570 13:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:13.570 13:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:32:13.570 13:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:32:13.570 13:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:32:13.570 13:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:32:15.482 Initializing NVMe Controllers 00:32:15.482 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:32:15.482 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:32:15.482 Initialization complete. Launching workers. 
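All the perf runs in this test share one flag vocabulary, so the table that follows (and each one after it) can be read straight off its command line: -q is the queue depth, -o the IO size in bytes, -w the workload pattern, -M the read percentage of the mix, -t the run time in seconds, and -r the transport ID to connect to. For example, the local-PCIe baseline run whose results appear next:

    # 32-deep random 50/50 read/write at 4 KiB for 1 s against the local drive:
    ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:PCIe traddr:0000:65:00.0'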
00:32:15.482 ======================================================== 00:32:15.482 Latency(us) 00:32:15.482 Device Information : IOPS MiB/s Average min max 00:32:15.482 PCIE (0000:65:00.0) NSID 1 from core 0: 74267.87 290.11 430.15 14.26 4846.66 00:32:15.482 ======================================================== 00:32:15.482 Total : 74267.87 290.11 430.15 14.26 4846.66 00:32:15.482 00:32:15.482 13:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:16.865 Initializing NVMe Controllers 00:32:16.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:16.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:16.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:16.865 Initialization complete. Launching workers. 00:32:16.865 ======================================================== 00:32:16.865 Latency(us) 00:32:16.865 Device Information : IOPS MiB/s Average min max 00:32:16.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.00 0.37 10724.18 255.83 45563.63 00:32:16.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15235.75 5981.23 47908.71 00:32:16.865 ======================================================== 00:32:16.865 Total : 162.00 0.63 12562.23 255.83 47908.71 00:32:16.865 00:32:16.865 13:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:18.249 Initializing NVMe Controllers 00:32:18.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:18.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:18.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:18.249 Initialization complete. Launching workers. 00:32:18.249 ======================================================== 00:32:18.249 Latency(us) 00:32:18.249 Device Information : IOPS MiB/s Average min max 00:32:18.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9535.28 37.25 3356.60 587.92 6952.14 00:32:18.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3786.23 14.79 8478.57 5767.69 16066.92 00:32:18.249 ======================================================== 00:32:18.249 Total : 13321.50 52.04 4812.36 587.92 16066.92 00:32:18.249 00:32:18.249 13:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:32:18.249 13:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:32:18.249 13:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:21.547 Initializing NVMe Controllers 00:32:21.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:21.547 Controller IO queue size 128, less than required. 00:32:21.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:21.547 Controller IO queue size 128, less than required. 00:32:21.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:21.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:21.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:21.547 Initialization complete. Launching workers. 00:32:21.547 ======================================================== 00:32:21.547 Latency(us) 00:32:21.547 Device Information : IOPS MiB/s Average min max 00:32:21.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1808.77 452.19 73399.01 40132.86 232117.22 00:32:21.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 562.49 140.62 241373.89 102676.63 424021.77 00:32:21.547 ======================================================== 00:32:21.547 Total : 2371.26 592.82 113244.87 40132.86 424021.77 00:32:21.547 00:32:21.547 13:37:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:32:21.547 No valid NVMe controllers or AIO or URING devices found 00:32:21.547 Initializing NVMe Controllers 00:32:21.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:21.547 Controller IO queue size 128, less than required. 00:32:21.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:21.547 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:32:21.547 Controller IO queue size 128, less than required. 00:32:21.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:21.547 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:32:21.547 WARNING: Some requested NVMe devices were skipped 00:32:21.547 13:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:32:24.086 Initializing NVMe Controllers 00:32:24.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:24.086 Controller IO queue size 128, less than required. 00:32:24.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.086 Controller IO queue size 128, less than required. 00:32:24.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:24.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:24.086 Initialization complete. Launching workers. 
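This last run adds --transport-stat, so before the usual latency table perf dumps per-poll-group TCP counters. A quick way to read the figures that follow: the non-idle polls (polls - idle_polls) match sock_completions exactly for both namespaces in this trace, and their ratio gives a rough sense of how busy the poll group was. Checking the NSID 1 numbers:

    # polls=15451, idle_polls=7973, sock_completions=7478 (from the stats below)
    awk 'BEGIN {
        polls = 15451; idle = 7973
        printf "non-idle polls: %d\n", polls - idle                    # 7478, = sock_completions
        printf "busy ratio:     %.1f%%\n", 100 * (polls - idle) / polls  # 48.4%
    }'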
00:32:24.086 00:32:24.086 ==================== 00:32:24.086 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:24.086 TCP transport: 00:32:24.086 polls: 15451 00:32:24.086 idle_polls: 7973 00:32:24.086 sock_completions: 7478 00:32:24.086 nvme_completions: 5723 00:32:24.086 submitted_requests: 8614 00:32:24.086 queued_requests: 1 00:32:24.086 00:32:24.086 ==================== 00:32:24.086 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:24.086 TCP transport: 00:32:24.086 polls: 18395 00:32:24.086 idle_polls: 10405 00:32:24.086 sock_completions: 7990 00:32:24.086 nvme_completions: 5991 00:32:24.086 submitted_requests: 8986 00:32:24.086 queued_requests: 1 00:32:24.086 ======================================================== 00:32:24.086 Latency(us) 00:32:24.086 Device Information : IOPS MiB/s Average min max 00:32:24.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1430.09 357.52 94401.25 49834.65 328716.31 00:32:24.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1497.07 374.27 86555.57 47502.86 267534.04 00:32:24.086 ======================================================== 00:32:24.086 Total : 2927.17 731.79 90388.65 47502.86 328716.31 00:32:24.086 00:32:24.086 13:37:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:32:24.086 13:37:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:24.347 13:37:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:24.347 13:37:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:32:24.347 13:37:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=3bbc728f-ab1d-4010-ac0f-6c1881d1b9a3 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 3bbc728f-ab1d-4010-ac0f-6c1881d1b9a3 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=3bbc728f-ab1d-4010-ac0f-6c1881d1b9a3 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:32:25.730 { 00:32:25.730 "uuid": "3bbc728f-ab1d-4010-ac0f-6c1881d1b9a3", 00:32:25.730 "name": "lvs_0", 00:32:25.730 "base_bdev": "Nvme0n1", 00:32:25.730 "total_data_clusters": 457407, 00:32:25.730 "free_clusters": 457407, 00:32:25.730 "block_size": 512, 00:32:25.730 "cluster_size": 4194304 00:32:25.730 } 00:32:25.730 ]' 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="3bbc728f-ab1d-4010-ac0f-6c1881d1b9a3") .free_clusters' 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=457407 00:32:25.730 13:37:33 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="3bbc728f-ab1d-4010-ac0f-6c1881d1b9a3") .cluster_size' 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=1829628 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 1829628 00:32:25.730 1829628 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:32:25.730 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3bbc728f-ab1d-4010-ac0f-6c1881d1b9a3 lbd_0 20480 00:32:25.990 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=bbe739d4-e0e1-4cf7-87fe-31611bae8bfb 00:32:25.990 13:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore bbe739d4-e0e1-4cf7-87fe-31611bae8bfb lvs_n_0 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=4330e741-ec80-44ff-a8b1-cc8cf0cd7a87 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 4330e741-ec80-44ff-a8b1-cc8cf0cd7a87 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=4330e741-ec80-44ff-a8b1-cc8cf0cd7a87 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:32:27.903 { 00:32:27.903 "uuid": "3bbc728f-ab1d-4010-ac0f-6c1881d1b9a3", 00:32:27.903 "name": "lvs_0", 00:32:27.903 "base_bdev": "Nvme0n1", 00:32:27.903 "total_data_clusters": 457407, 00:32:27.903 "free_clusters": 452287, 00:32:27.903 "block_size": 512, 00:32:27.903 "cluster_size": 4194304 00:32:27.903 }, 00:32:27.903 { 00:32:27.903 "uuid": "4330e741-ec80-44ff-a8b1-cc8cf0cd7a87", 00:32:27.903 "name": "lvs_n_0", 00:32:27.903 "base_bdev": "bbe739d4-e0e1-4cf7-87fe-31611bae8bfb", 00:32:27.903 "total_data_clusters": 5114, 00:32:27.903 "free_clusters": 5114, 00:32:27.903 "block_size": 512, 00:32:27.903 "cluster_size": 4194304 00:32:27.903 } 00:32:27.903 ]' 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="4330e741-ec80-44ff-a8b1-cc8cf0cd7a87") .free_clusters' 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=5114 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="4330e741-ec80-44ff-a8b1-cc8cf0cd7a87") .cluster_size' 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=20456 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1376 -- # echo 20456 00:32:27.903 20456 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4330e741-ec80-44ff-a8b1-cc8cf0cd7a87 lbd_nest_0 20456 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=7bd3e390-9572-4886-8a3e-4ade2c44562e 00:32:27.903 13:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:28.164 13:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:28.164 13:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 7bd3e390-9572-4886-8a3e-4ade2c44562e 00:32:28.424 13:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:28.684 13:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:28.684 13:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:28.684 13:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:28.684 13:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:28.684 13:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:40.909 Initializing NVMe Controllers 00:32:40.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:40.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:40.909 Initialization complete. Launching workers. 00:32:40.909 ======================================================== 00:32:40.909 Latency(us) 00:32:40.909 Device Information : IOPS MiB/s Average min max 00:32:40.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.39 0.02 21571.76 122.11 47374.13 00:32:40.909 ======================================================== 00:32:40.909 Total : 46.39 0.02 21571.76 122.11 47374.13 00:32:40.909 00:32:40.909 13:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:40.909 13:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:50.912 Initializing NVMe Controllers 00:32:50.912 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:50.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:50.912 Initialization complete. Launching workers. 
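The free-space numbers printed by get_lvs_free_mb above are plain cluster arithmetic, free_mb = free_clusters * cluster_size / 1024^2, which is easy to check against both lvstores:

    echo $(( 457407 * 4194304 / 1048576 ))   # lvs_0:   1829628 -> capped to the 20480 MB test size
    echo $((   5114 * 4194304 / 1048576 ))   # lvs_n_0: 20456, used as-is for lbd_nest_0

lvs_n_0 comes out 24 MB short of the 20480 MB lbd_0 it sits on because the nested lvstore's own metadata is paid out of the same volume; the 1/32/128 queue-depth sweep over 512 B and 128 KiB IOs that follows runs against that nested volume.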
00:32:50.912 ======================================================== 00:32:50.912 Latency(us) 00:32:50.912 Device Information : IOPS MiB/s Average min max 00:32:50.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 57.78 7.22 17333.87 7831.09 51885.09 00:32:50.912 ======================================================== 00:32:50.912 Total : 57.78 7.22 17333.87 7831.09 51885.09 00:32:50.912 00:32:50.912 13:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:50.912 13:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:50.912 13:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:00.909 Initializing NVMe Controllers 00:33:00.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:00.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:00.909 Initialization complete. Launching workers. 00:33:00.909 ======================================================== 00:33:00.909 Latency(us) 00:33:00.909 Device Information : IOPS MiB/s Average min max 00:33:00.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8422.80 4.11 3799.14 557.19 7982.44 00:33:00.909 ======================================================== 00:33:00.909 Total : 8422.80 4.11 3799.14 557.19 7982.44 00:33:00.909 00:33:00.909 13:38:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:00.909 13:38:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:10.907 Initializing NVMe Controllers 00:33:10.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:10.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:10.907 Initialization complete. Launching workers. 00:33:10.907 ======================================================== 00:33:10.907 Latency(us) 00:33:10.907 Device Information : IOPS MiB/s Average min max 00:33:10.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3555.35 444.42 9001.65 585.15 22398.63 00:33:10.907 ======================================================== 00:33:10.907 Total : 3555.35 444.42 9001.65 585.15 22398.63 00:33:10.907 00:33:10.907 13:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:10.907 13:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:10.907 13:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:20.904 Initializing NVMe Controllers 00:33:20.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:20.904 Controller IO queue size 128, less than required. 00:33:20.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:33:20.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:20.904 Initialization complete. Launching workers. 00:33:20.904 ======================================================== 00:33:20.904 Latency(us) 00:33:20.904 Device Information : IOPS MiB/s Average min max 00:33:20.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15740.91 7.69 8133.46 1951.11 21884.41 00:33:20.904 ======================================================== 00:33:20.904 Total : 15740.91 7.69 8133.46 1951.11 21884.41 00:33:20.904 00:33:20.904 13:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:20.904 13:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:33.127 Initializing NVMe Controllers 00:33:33.127 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:33.127 Controller IO queue size 128, less than required. 00:33:33.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:33.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:33.127 Initialization complete. Launching workers. 00:33:33.127 ======================================================== 00:33:33.127 Latency(us) 00:33:33.127 Device Information : IOPS MiB/s Average min max 00:33:33.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1134.23 141.78 113921.10 15813.67 250344.47 00:33:33.127 ======================================================== 00:33:33.127 Total : 1134.23 141.78 113921.10 15813.67 250344.47 00:33:33.127 00:33:33.127 13:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:33.127 13:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7bd3e390-9572-4886-8a3e-4ade2c44562e 00:33:33.127 13:38:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:33.127 13:38:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bbe739d4-e0e1-4cf7-87fe-31611bae8bfb 00:33:33.387 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:33.387 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:33.387 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:33.388 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:33.388 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:33:33.388 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:33.388 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:33:33.388 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:33.388 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:33.388 rmmod nvme_tcp 
00:33:33.388 rmmod nvme_fabrics 00:33:33.648 rmmod nvme_keyring 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 4032938 ']' 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 4032938 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 4032938 ']' 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 4032938 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4032938 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4032938' 00:33:33.648 killing process with pid 4032938 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 4032938 00:33:33.648 13:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 4032938 00:33:36.192 13:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.192 13:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:36.192 13:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.192 13:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:33:36.192 13:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:33:36.192 13:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.192 13:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:33:36.192 13:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.192 13:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.192 13:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.192 13:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.192 13:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:38.736 00:33:38.736 real 1m36.175s 00:33:38.736 user 5m37.197s 00:33:38.736 sys 0m16.328s 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:38.736 ************************************ 00:33:38.736 END TEST nvmf_perf 00:33:38.736 ************************************ 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.736 ************************************ 00:33:38.736 START TEST nvmf_fio_host 00:33:38.736 ************************************ 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:38.736 * Looking for test storage... 00:33:38.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.736 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:38.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.737 --rc genhtml_branch_coverage=1 00:33:38.737 --rc genhtml_function_coverage=1 00:33:38.737 --rc genhtml_legend=1 00:33:38.737 --rc geninfo_all_blocks=1 00:33:38.737 --rc geninfo_unexecuted_blocks=1 00:33:38.737 00:33:38.737 ' 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:38.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.737 --rc genhtml_branch_coverage=1 00:33:38.737 --rc genhtml_function_coverage=1 00:33:38.737 --rc genhtml_legend=1 00:33:38.737 --rc geninfo_all_blocks=1 00:33:38.737 --rc geninfo_unexecuted_blocks=1 00:33:38.737 00:33:38.737 ' 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:38.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.737 --rc genhtml_branch_coverage=1 00:33:38.737 --rc genhtml_function_coverage=1 00:33:38.737 --rc genhtml_legend=1 00:33:38.737 --rc geninfo_all_blocks=1 00:33:38.737 --rc geninfo_unexecuted_blocks=1 00:33:38.737 00:33:38.737 ' 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:38.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.737 --rc genhtml_branch_coverage=1 00:33:38.737 --rc genhtml_function_coverage=1 00:33:38.737 --rc genhtml_legend=1 00:33:38.737 --rc geninfo_all_blocks=1 00:33:38.737 --rc geninfo_unexecuted_blocks=1 00:33:38.737 00:33:38.737 ' 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.737 13:38:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.737 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:38.738 
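The '[: : integer expression expected' message above is a real shell error from nvmf/common.sh line 33: the traced test is '[' '' -eq 1 ']', and test(1) cannot compare an empty string as an integer. A minimal sketch of the failure and one defensive rewrite (illustrative only; cfg_flag is a hypothetical stand-in for whichever variable expanded empty, not the repo's actual fix):

  cfg_flag=''                                        # stands in for the value that expanded empty
  [ "$cfg_flag" -eq 1 ] || true                      # prints "[: : integer expression expected", exit status 2
  [ "${cfg_flag:-0}" -eq 1 ] || echo 'flag not set'  # empty defaults to 0, so the comparison stays numeric

The harness tolerates the error because the failed test simply returns non-zero and the branch is skipped, which is why the trace continues straight on to the next conditional.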
13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.738 13:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.873 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:46.874 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:46.874 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:46.874 Found net devices under 0000:31:00.0: cvl_0_0 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:46.874 Found net devices under 0000:31:00.1: cvl_0_1 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:46.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:33:46.874 00:33:46.874 --- 10.0.0.2 ping statistics --- 00:33:46.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.874 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:46.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:33:46.874 00:33:46.874 --- 10.0.0.1 ping statistics --- 00:33:46.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.874 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4053457 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4053457 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 4053457 ']' 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.874 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:46.875 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.875 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:46.875 13:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.134 [2024-11-07 13:38:54.901989] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
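Both pings succeeding is the gate for the rest of the host tests: the target-side E810 port (cvl_0_0, 10.0.0.2) has been moved into the cvl_0_0_ns_spdk namespace while the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace, so NVMe/TCP traffic actually crosses the physical link instead of loopback. A condensed sketch of the same plumbing, assuming this run's interface names and run as root:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                      # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # reachability check, both directions

nvmf_tgt itself is then launched through the same 'ip netns exec' prefix (the NVMF_TARGET_NS_CMD assembled above), which is why the target's listener on 10.0.0.2:4420 is only reachable over the cabled path.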
00:33:47.134 [2024-11-07 13:38:54.902098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.134 [2024-11-07 13:38:55.051290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:47.394 [2024-11-07 13:38:55.150240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.394 [2024-11-07 13:38:55.150285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.394 [2024-11-07 13:38:55.150297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.394 [2024-11-07 13:38:55.150309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.394 [2024-11-07 13:38:55.150318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.394 [2024-11-07 13:38:55.152582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.394 [2024-11-07 13:38:55.152666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:47.395 [2024-11-07 13:38:55.152780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.395 [2024-11-07 13:38:55.152805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:47.966 13:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:47.966 13:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:33:47.966 13:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:47.966 [2024-11-07 13:38:55.825701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.966 13:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:47.966 13:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:47.966 13:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.966 13:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:48.226 Malloc1 00:33:48.226 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:48.487 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:48.748 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:48.748 [2024-11-07 13:38:56.658999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.748 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:49.008 13:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:49.577 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:49.577 fio-3.35 00:33:49.577 Starting 1 thread 00:33:52.121 00:33:52.121 test: (groupid=0, jobs=1): err= 0: pid=4054193: Thu Nov 7 13:38:59 2024 00:33:52.121 read: IOPS=8531, BW=33.3MiB/s (34.9MB/s)(66.9MiB/2006msec) 00:33:52.121 slat (usec): min=2, max=307, avg= 2.35, stdev= 3.11 00:33:52.121 clat (usec): min=4039, max=14752, avg=8276.65, stdev=632.09 00:33:52.121 lat (usec): min=4086, max=14755, avg=8279.00, stdev=631.95 00:33:52.121 clat percentiles (usec): 00:33:52.121 | 1.00th=[ 6783], 5.00th=[ 7308], 10.00th=[ 7504], 20.00th=[ 7767], 00:33:52.121 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:33:52.121 | 70.00th=[ 8586], 80.00th=[ 
8717], 90.00th=[ 8979], 95.00th=[ 9241], 00:33:52.121 | 99.00th=[ 9634], 99.50th=[ 9896], 99.90th=[13173], 99.95th=[14222], 00:33:52.121 | 99.99th=[14746] 00:33:52.121 bw ( KiB/s): min=33077, max=34688, per=99.86%, avg=34079.25, stdev=703.15, samples=4 00:33:52.121 iops : min= 8269, max= 8672, avg=8519.75, stdev=175.91, samples=4 00:33:52.121 write: IOPS=8533, BW=33.3MiB/s (35.0MB/s)(66.9MiB/2006msec); 0 zone resets 00:33:52.121 slat (usec): min=2, max=264, avg= 2.45, stdev= 2.31 00:33:52.121 clat (usec): min=3116, max=12604, avg=6671.77, stdev=517.39 00:33:52.121 lat (usec): min=3141, max=12606, avg=6674.21, stdev=517.35 00:33:52.121 clat percentiles (usec): 00:33:52.121 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6259], 00:33:52.121 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6652], 60.00th=[ 6783], 00:33:52.121 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:33:52.121 | 99.00th=[ 7832], 99.50th=[ 8029], 99.90th=[10945], 99.95th=[11863], 00:33:52.121 | 99.99th=[12518] 00:33:52.121 bw ( KiB/s): min=33920, max=34216, per=99.89%, avg=34096.75, stdev=125.61, samples=4 00:33:52.121 iops : min= 8480, max= 8554, avg=8524.00, stdev=31.37, samples=4 00:33:52.121 lat (msec) : 4=0.03%, 10=99.72%, 20=0.25% 00:33:52.121 cpu : usr=74.91%, sys=24.04%, ctx=37, majf=0, minf=1537 00:33:52.121 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:52.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:52.121 issued rwts: total=17114,17118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.121 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:52.121 00:33:52.121 Run status group 0 (all jobs): 00:33:52.121 READ: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=66.9MiB (70.1MB), run=2006-2006msec 00:33:52.121 WRITE: bw=33.3MiB/s (35.0MB/s), 33.3MiB/s-33.3MiB/s (35.0MB/s-35.0MB/s), io=66.9MiB (70.1MB), run=2006-2006msec 00:33:52.383 ----------------------------------------------------- 00:33:52.383 Suppressions used: 00:33:52.383 count bytes template 00:33:52.383 1 57 /usr/src/fio/parse.c 00:33:52.383 1 8 libtcmalloc_minimal.so 00:33:52.383 ----------------------------------------------------- 00:33:52.383 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 
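The run that just completed shows the launch recipe every fio pass in this log uses: ldd the SPDK fio plugin, pick out the matching libasan, then LD_PRELOAD both the sanitizer runtime and spdk_nvme so that ioengine=spdk resolves inside stock fio. Reduced to its essentials with this run's paths (the libasan.so.8 preload applies only because this is an ASAN build; a plain build would preload spdk_nvme alone):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_nvme" \
    /usr/src/fio/fio "$SPDK/app/fio/nvme/example_config.fio" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096

The quoted --filename is parsed by the plugin, not by fio itself: transport type, address family, target address, service id, and namespace are packed into the single argument, matching the subsystem and listener created a few steps earlier.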
00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:52.383 13:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:52.645 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:52.645 fio-3.35 00:33:52.645 Starting 1 thread 00:33:55.192 00:33:55.192 test: (groupid=0, jobs=1): err= 0: pid=4054885: Thu Nov 7 13:39:03 2024 00:33:55.192 read: IOPS=8526, BW=133MiB/s (140MB/s)(267MiB/2002msec) 00:33:55.192 slat (usec): min=3, max=118, avg= 3.86, stdev= 1.48 00:33:55.192 clat (usec): min=2110, max=15968, avg=8897.60, stdev=2030.36 00:33:55.192 lat (usec): min=2113, max=15971, avg=8901.46, stdev=2030.42 00:33:55.192 clat percentiles (usec): 00:33:55.192 | 1.00th=[ 4817], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 7046], 00:33:55.192 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:33:55.192 | 70.00th=[10028], 80.00th=[10945], 90.00th=[11338], 95.00th=[11863], 00:33:55.192 | 99.00th=[14091], 99.50th=[14877], 99.90th=[15664], 99.95th=[15795], 00:33:55.192 | 99.99th=[15926] 00:33:55.192 bw ( KiB/s): min=54624, max=80832, per=51.17%, avg=69816.00, stdev=11923.21, samples=4 00:33:55.192 iops : min= 3414, max= 5052, avg=4363.50, stdev=745.20, samples=4 00:33:55.192 write: IOPS=5176, BW=80.9MiB/s (84.8MB/s)(143MiB/1769msec); 0 zone resets 00:33:55.192 slat (usec): min=40, max=322, avg=41.62, stdev= 6.01 00:33:55.192 clat (usec): min=2142, max=18472, avg=10361.00, stdev=1692.38 00:33:55.192 lat (usec): min=2182, max=18512, avg=10402.61, stdev=1692.83 00:33:55.192 clat percentiles (usec): 00:33:55.192 | 1.00th=[ 7308], 5.00th=[ 8029], 10.00th=[ 8356], 20.00th=[ 8979], 00:33:55.192 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10552], 00:33:55.192 | 70.00th=[10945], 80.00th=[11600], 90.00th=[12518], 95.00th=[13435], 00:33:55.192 | 99.00th=[15533], 99.50th=[16188], 99.90th=[16712], 99.95th=[17695], 00:33:55.192 | 99.99th=[18482] 00:33:55.192 bw ( KiB/s): min=56768, max=84352, per=87.77%, avg=72696.00, stdev=12515.62, samples=4 00:33:55.192 iops : min= 3548, max= 5272, avg=4543.50, stdev=782.23, samples=4 00:33:55.192 lat (msec) : 4=0.29%, 10=60.13%, 20=39.58% 00:33:55.192 cpu : usr=85.71%, 
sys=12.84%, ctx=17, majf=0, minf=2282 00:33:55.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:33:55.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:55.192 issued rwts: total=17071,9157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.192 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:55.192 00:33:55.192 Run status group 0 (all jobs): 00:33:55.192 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=267MiB (280MB), run=2002-2002msec 00:33:55.192 WRITE: bw=80.9MiB/s (84.8MB/s), 80.9MiB/s-80.9MiB/s (84.8MB/s-84.8MB/s), io=143MiB (150MB), run=1769-1769msec 00:33:55.453 ----------------------------------------------------- 00:33:55.453 Suppressions used: 00:33:55.453 count bytes template 00:33:55.453 1 57 /usr/src/fio/parse.c 00:33:55.453 870 83520 /usr/src/fio/iolog.c 00:33:55.453 1 8 libtcmalloc_minimal.so 00:33:55.453 ----------------------------------------------------- 00:33:55.453 00:33:55.453 13:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:55.715 13:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:55.715 13:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:55.715 13:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:55.715 13:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:33:55.715 13:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:33:55.715 13:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:55.715 13:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:55.715 13:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:33:55.715 13:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:33:55.715 13:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:33:55.715 13:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:33:56.287 Nvme0n1 00:33:56.287 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:56.861 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=26a67977-4b40-474a-abb4-4858c5064a1f 00:33:56.861 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 26a67977-4b40-474a-abb4-4858c5064a1f 00:33:56.861 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=26a67977-4b40-474a-abb4-4858c5064a1f 00:33:56.861 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:33:56.861 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:33:56.861 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1369 -- # local cs 00:33:56.861 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:57.122 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:33:57.122 { 00:33:57.122 "uuid": "26a67977-4b40-474a-abb4-4858c5064a1f", 00:33:57.122 "name": "lvs_0", 00:33:57.122 "base_bdev": "Nvme0n1", 00:33:57.122 "total_data_clusters": 1787, 00:33:57.122 "free_clusters": 1787, 00:33:57.122 "block_size": 512, 00:33:57.122 "cluster_size": 1073741824 00:33:57.122 } 00:33:57.122 ]' 00:33:57.122 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="26a67977-4b40-474a-abb4-4858c5064a1f") .free_clusters' 00:33:57.122 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=1787 00:33:57.122 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="26a67977-4b40-474a-abb4-4858c5064a1f") .cluster_size' 00:33:57.122 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=1073741824 00:33:57.122 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=1829888 00:33:57.122 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 1829888 00:33:57.122 1829888 00:33:57.122 13:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:33:57.383 60fb0226-fcb9-4432-ace9-e74906faa568 00:33:57.383 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:57.383 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:57.644 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:57.905 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:57.905 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:57.905 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:57.905 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:57.905 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:57.905 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:57.905 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- 
# shift 00:33:57.905 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:57.905 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:57.905 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:57.906 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:33:57.906 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:57.906 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:57.906 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:57.906 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:33:57.906 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:57.906 13:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:58.166 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:58.166 fio-3.35 00:33:58.166 Starting 1 thread 00:34:00.712 00:34:00.712 test: (groupid=0, jobs=1): err= 0: pid=4056212: Thu Nov 7 13:39:08 2024 00:34:00.712 read: IOPS=9181, BW=35.9MiB/s (37.6MB/s)(71.9MiB/2006msec) 00:34:00.712 slat (usec): min=2, max=121, avg= 2.35, stdev= 1.29 00:34:00.712 clat (usec): min=2942, max=12967, avg=7665.09, stdev=595.00 00:34:00.712 lat (usec): min=2961, max=12969, avg=7667.44, stdev=594.94 00:34:00.712 clat percentiles (usec): 00:34:00.712 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:34:00.712 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7832], 00:34:00.712 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 8586], 00:34:00.712 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[10290], 99.95th=[11731], 00:34:00.712 | 99.99th=[12911] 00:34:00.712 bw ( KiB/s): min=35560, max=37288, per=99.91%, avg=36696.00, stdev=772.32, samples=4 00:34:00.712 iops : min= 8890, max= 9322, avg=9174.00, stdev=193.08, samples=4 00:34:00.712 write: IOPS=9189, BW=35.9MiB/s (37.6MB/s)(72.0MiB/2006msec); 0 zone resets 00:34:00.712 slat (nsec): min=2227, max=112722, avg=2432.95, stdev=892.37 00:34:00.712 clat (usec): min=1527, max=11071, avg=6174.26, stdev=507.01 00:34:00.712 lat (usec): min=1538, max=11073, avg=6176.69, stdev=506.97 00:34:00.712 clat percentiles (usec): 00:34:00.712 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:34:00.712 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6259], 00:34:00.712 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6915], 00:34:00.712 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 9241], 99.95th=[10290], 00:34:00.712 | 99.99th=[10945] 00:34:00.712 bw ( KiB/s): min=36376, max=37056, per=100.00%, avg=36758.00, stdev=300.57, samples=4 00:34:00.712 iops : min= 9094, max= 9264, avg=9189.50, stdev=75.14, samples=4 00:34:00.712 lat (msec) : 2=0.01%, 4=0.10%, 10=99.79%, 20=0.11% 00:34:00.712 cpu : 
usr=74.91%, sys=23.99%, ctx=24, majf=0, minf=1534 00:34:00.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:00.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.712 issued rwts: total=18419,18435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.712 00:34:00.712 Run status group 0 (all jobs): 00:34:00.712 READ: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=71.9MiB (75.4MB), run=2006-2006msec 00:34:00.712 WRITE: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=72.0MiB (75.5MB), run=2006-2006msec 00:34:00.974 ----------------------------------------------------- 00:34:00.974 Suppressions used: 00:34:00.974 count bytes template 00:34:00.974 1 58 /usr/src/fio/parse.c 00:34:00.974 1 8 libtcmalloc_minimal.so 00:34:00.974 ----------------------------------------------------- 00:34:00.974 00:34:00.974 13:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:01.236 13:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:34:02.182 13:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=a2eb033b-9550-47af-ae6a-292de052ef20 00:34:02.182 13:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb a2eb033b-9550-47af-ae6a-292de052ef20 00:34:02.182 13:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=a2eb033b-9550-47af-ae6a-292de052ef20 00:34:02.182 13:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:34:02.182 13:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:34:02.183 13:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:34:02.183 13:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:02.183 13:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:34:02.183 { 00:34:02.183 "uuid": "26a67977-4b40-474a-abb4-4858c5064a1f", 00:34:02.183 "name": "lvs_0", 00:34:02.183 "base_bdev": "Nvme0n1", 00:34:02.183 "total_data_clusters": 1787, 00:34:02.183 "free_clusters": 0, 00:34:02.183 "block_size": 512, 00:34:02.183 "cluster_size": 1073741824 00:34:02.183 }, 00:34:02.183 { 00:34:02.183 "uuid": "a2eb033b-9550-47af-ae6a-292de052ef20", 00:34:02.183 "name": "lvs_n_0", 00:34:02.183 "base_bdev": "60fb0226-fcb9-4432-ace9-e74906faa568", 00:34:02.183 "total_data_clusters": 457025, 00:34:02.183 "free_clusters": 457025, 00:34:02.183 "block_size": 512, 00:34:02.183 "cluster_size": 4194304 00:34:02.183 } 00:34:02.183 ]' 00:34:02.183 13:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="a2eb033b-9550-47af-ae6a-292de052ef20") .free_clusters' 00:34:02.183 13:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=457025 00:34:02.183 13:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | 
select(.uuid=="a2eb033b-9550-47af-ae6a-292de052ef20") .cluster_size' 00:34:02.183 13:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=4194304 00:34:02.183 13:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=1828100 00:34:02.183 13:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 1828100 00:34:02.183 1828100 00:34:02.183 13:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:34:04.729 f0e025e4-1181-4d57-887c-59f7a5abf440 00:34:04.729 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:34:04.729 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:04.990 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:34:04.991 13:39:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:04.991 13:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:05.583 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:05.583 fio-3.35 00:34:05.583 Starting 1 thread 00:34:08.129 00:34:08.129 test: (groupid=0, jobs=1): err= 0: pid=4058204: Thu Nov 7 13:39:15 2024 00:34:08.129 read: IOPS=5659, BW=22.1MiB/s (23.2MB/s)(44.4MiB/2010msec) 00:34:08.129 slat (usec): min=2, max=123, avg= 2.39, stdev= 1.60 00:34:08.129 clat (usec): min=4373, max=20295, avg=12509.20, stdev=1055.44 00:34:08.129 lat (usec): min=4393, max=20297, avg=12511.59, stdev=1055.33 00:34:08.129 clat percentiles (usec): 00:34:08.129 | 1.00th=[10028], 5.00th=[10814], 10.00th=[11207], 20.00th=[11731], 00:34:08.129 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:34:08.129 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13829], 95.00th=[14091], 00:34:08.129 | 99.00th=[14746], 99.50th=[15008], 99.90th=[18482], 99.95th=[20055], 00:34:08.129 | 99.99th=[20317] 00:34:08.129 bw ( KiB/s): min=21456, max=23088, per=99.89%, avg=22612.00, stdev=778.72, samples=4 00:34:08.129 iops : min= 5364, max= 5772, avg=5653.00, stdev=194.68, samples=4 00:34:08.129 write: IOPS=5625, BW=22.0MiB/s (23.0MB/s)(44.2MiB/2010msec); 0 zone resets 00:34:08.129 slat (usec): min=2, max=113, avg= 2.50, stdev= 1.13 00:34:08.129 clat (usec): min=2097, max=18679, avg=9976.12, stdev=925.99 00:34:08.129 lat (usec): min=2111, max=18681, avg=9978.61, stdev=925.92 00:34:08.129 clat percentiles (usec): 00:34:08.129 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9372], 00:34:08.129 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:34:08.129 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11338], 00:34:08.129 | 99.00th=[11863], 99.50th=[12125], 99.90th=[17433], 99.95th=[18482], 00:34:08.129 | 99.99th=[18744] 00:34:08.129 bw ( KiB/s): min=22368, max=22720, per=99.98%, avg=22500.00, stdev=152.56, samples=4 00:34:08.129 iops : min= 5592, max= 5680, avg=5625.00, stdev=38.14, samples=4 00:34:08.129 lat (msec) : 4=0.04%, 10=25.68%, 20=74.24%, 50=0.04% 00:34:08.129 cpu : usr=72.82%, sys=26.38%, ctx=28, majf=0, minf=1536 00:34:08.129 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:34:08.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:08.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:08.129 issued rwts: total=11375,11308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:08.129 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:08.129 00:34:08.129 Run status group 0 (all jobs): 00:34:08.129 READ: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=44.4MiB (46.6MB), run=2010-2010msec 00:34:08.129 WRITE: bw=22.0MiB/s (23.0MB/s), 22.0MiB/s-22.0MiB/s (23.0MB/s-23.0MB/s), io=44.2MiB (46.3MB), run=2010-2010msec 00:34:08.129 ----------------------------------------------------- 00:34:08.129 Suppressions used: 00:34:08.129 count bytes template 00:34:08.129 1 58 /usr/src/fio/parse.c 00:34:08.129 1 8 libtcmalloc_minimal.so 00:34:08.129 
----------------------------------------------------- 00:34:08.129 00:34:08.129 13:39:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:34:08.391 13:39:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:34:08.391 13:39:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:34:11.699 13:39:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:11.963 13:39:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:34:12.539 13:39:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:12.867 13:39:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:14.914 rmmod nvme_tcp 00:34:14.914 rmmod nvme_fabrics 00:34:14.914 rmmod nvme_keyring 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 4053457 ']' 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 4053457 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 4053457 ']' 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 4053457 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4053457 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 4053457' 00:34:14.914 killing process with pid 4053457 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 4053457 00:34:14.914 13:39:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 4053457 00:34:15.866 13:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:15.866 13:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:15.866 13:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:15.866 13:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:34:15.866 13:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:34:15.866 13:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:15.866 13:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:15.866 13:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:15.866 13:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:15.866 13:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.866 13:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.866 13:39:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.779 13:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:17.779 00:34:17.779 real 0m39.462s 00:34:17.779 user 3m0.380s 00:34:17.779 sys 0m13.699s 00:34:17.779 13:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:17.779 13:39:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.779 ************************************ 00:34:17.779 END TEST nvmf_fio_host 00:34:17.779 ************************************ 00:34:17.779 13:39:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:17.779 13:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:17.779 13:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:17.779 13:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.779 ************************************ 00:34:17.779 START TEST nvmf_failover 00:34:17.779 ************************************ 00:34:17.780 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:18.041 * Looking for test storage... 
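The teardown traced above (rmmod of the nvme-tcp modules, then killprocess 4053457) follows the autotest_common.sh pattern: probe the pid with kill -0, read its process name to decide whether the kill needs sudo (reactor_0 here does not), then kill and reap with wait. A minimal sketch of that flow in the same bash idiom, with the sudo and non-Linux branches left out:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1    # probe only: -0 delivers no signal
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($process_name)"
        kill "$pid"           # the real helper escalates to sudo when the name is sudo
        wait "$pid" || true   # reap the child so a later test sees no stray exit status
    }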
00:34:18.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:18.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.041 --rc genhtml_branch_coverage=1 00:34:18.041 --rc genhtml_function_coverage=1 00:34:18.041 --rc genhtml_legend=1 00:34:18.041 --rc geninfo_all_blocks=1 00:34:18.041 --rc geninfo_unexecuted_blocks=1 00:34:18.041 00:34:18.041 ' 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:18.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.041 --rc genhtml_branch_coverage=1 00:34:18.041 --rc genhtml_function_coverage=1 00:34:18.041 --rc genhtml_legend=1 00:34:18.041 --rc geninfo_all_blocks=1 00:34:18.041 --rc geninfo_unexecuted_blocks=1 00:34:18.041 00:34:18.041 ' 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:18.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.041 --rc genhtml_branch_coverage=1 00:34:18.041 --rc genhtml_function_coverage=1 00:34:18.041 --rc genhtml_legend=1 00:34:18.041 --rc geninfo_all_blocks=1 00:34:18.041 --rc geninfo_unexecuted_blocks=1 00:34:18.041 00:34:18.041 ' 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:18.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.041 --rc genhtml_branch_coverage=1 00:34:18.041 --rc genhtml_function_coverage=1 00:34:18.041 --rc genhtml_legend=1 00:34:18.041 --rc geninfo_all_blocks=1 00:34:18.041 --rc geninfo_unexecuted_blocks=1 00:34:18.041 00:34:18.041 ' 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.041 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:18.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
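The storage probe at the top of this test also exercised the version comparator from scripts/common.sh: "lt 1.15 2" calls cmp_versions, which splits each version string on ".", "-", and ":" into an array and compares components left to right out to the longer array's length. A condensed sketch of that loop, assuming plain numeric components (the real helper first routes every component through a decimal() sanitizer, omitted here):

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local op=$2 v lt=0 gt=0
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { gt=1; break; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }
        done
        case "$op" in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
            *)   return 1 ;;    # the real helper also answers ==, <= and >=
        esac
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2"   # true: 1 < 2 at index 0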
00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:34:18.042 13:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:26.186 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:26.186 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:26.186 Found net devices under 0000:31:00.0: cvl_0_0 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.186 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:26.187 Found net devices under 0000:31:00.1: cvl_0_1 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:26.187 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:26.446 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:26.446 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:26.446 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:26.446 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:26.446 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:26.446 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:26.446 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:26.446 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:26.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:26.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:34:26.446 00:34:26.446 --- 10.0.0.2 ping statistics --- 00:34:26.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.446 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:34:26.446 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:26.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:26.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:34:26.446 00:34:26.446 --- 10.0.0.1 ping statistics --- 00:34:26.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.446 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:34:26.446 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:26.446 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:34:26.446 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=4064654 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 4064654 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 4064654 ']' 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:26.447 13:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:26.708 [2024-11-07 13:39:34.516315] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:34:26.708 [2024-11-07 13:39:34.516446] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:26.708 [2024-11-07 13:39:34.698903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:26.969 [2024-11-07 13:39:34.824324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
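nvmf_tcp_init above splits target and initiator across network namespaces on a single host: cvl_0_0 moves into cvl_0_0_ns_spdk and takes the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP port, and one ping in each direction proves the path (0.666 ms and 0.290 ms round trips here). The same sequence reduced to its bare commands, to be run as root with this host's interface names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back again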
00:34:26.969 [2024-11-07 13:39:34.824392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:26.969 [2024-11-07 13:39:34.824406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:26.969 [2024-11-07 13:39:34.824419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:26.969 [2024-11-07 13:39:34.824428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:26.969 [2024-11-07 13:39:34.826968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:26.969 [2024-11-07 13:39:34.827302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.969 [2024-11-07 13:39:34.827318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:27.540 13:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:27.540 13:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:34:27.540 13:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:27.540 13:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:27.540 13:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:27.540 13:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:27.540 13:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:27.540 [2024-11-07 13:39:35.473471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:27.540 13:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:27.801 Malloc0 00:34:27.801 13:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:28.062 13:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:28.322 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:28.322 [2024-11-07 13:39:36.248286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:28.322 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:28.582 [2024-11-07 13:39:36.424772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:28.582 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:28.842 [2024-11-07 13:39:36.609332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:34:28.842 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4065131 00:34:28.842 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:28.842 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:34:28.842 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4065131 /var/tmp/bdevperf.sock 00:34:28.842 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 4065131 ']' 00:34:28.842 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:28.842 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:28.842 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:28.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:28.842 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:28.842 13:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:29.785 13:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:29.785 13:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:34:29.785 13:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:29.785 NVMe0n1 00:34:29.785 13:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:30.046 00:34:30.046 13:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4065352 00:34:30.046 13:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:30.046 13:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:34:31.430 13:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:31.431 [2024-11-07 13:39:39.189124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
00:34:31.431 [2024-11-07 13:39:39.189184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
00:34:31.431 [2024-11-07 13:39:39.189320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 [2024-11-07 13:39:39.189395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:31.431 13:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:34:34.732 13:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:34.732 00:34:34.732 13:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:34.993 [2024-11-07 13:39:42.788806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:34:34.993 [2024-11-07 13:39:42.788843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:34:34.993 [2024-11-07 13:39:42.788852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:34:34.993 [2024-11-07 13:39:42.788859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:34:34.993 [2024-11-07 13:39:42.788875] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set
00:34:34.993 [message repeated 3 more times through 13:39:42.788895]
00:34:34.993 13:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:34:38.293 13:39:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:38.293 [2024-11-07 13:39:45.980790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:38.293 13:39:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:34:39.236 13:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:34:39.236 [2024-11-07 13:39:47.167479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set
00:34:39.237 [message repeated 31 more times through 13:39:47.167723]
00:34:39.237 13:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 4065352
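The listener moves traced at host/failover.sh@53 and @57 above are plain SPDK JSON-RPC calls, so this failover stimulus can be replayed outside the CI harness. A minimal sketch, assuming an SPDK checkout with a target already serving nqn.2016-06.io.spdk:cnode1; the rpc.py path is a placeholder for your own tree, and the addresses simply mirror this log:

    RPC=./scripts/rpc.py                      # assumption: run from the root of an SPDK checkout
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_add_listener    "$NQN" -t tcp -a 10.0.0.2 -s 4420  # restore the original port
    sleep 1                                                                # give the host time to reconnect
    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422  # drop the alternate port, forcing failover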
00:34:45.821 "verify_range": { 00:34:45.821 "start": 0, 00:34:45.821 "length": 16384 00:34:45.821 }, 00:34:45.821 "queue_depth": 128, 00:34:45.821 "io_size": 4096, 00:34:45.821 "runtime": 15.011775, 00:34:45.821 "iops": 10357.069700285276, 00:34:45.821 "mibps": 40.45730351673936, 00:34:45.821 "io_failed": 7773, 00:34:45.821 "io_timeout": 0, 00:34:45.821 "avg_latency_us": 11740.705774543494, 00:34:45.821 "min_latency_us": 573.44, 00:34:45.821 "max_latency_us": 17694.72 00:34:45.821 } 00:34:45.821 ], 00:34:45.821 "core_count": 1 00:34:45.821 } 00:34:45.821 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 4065131 00:34:45.821 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 4065131 ']' 00:34:45.821 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 4065131 00:34:45.821 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:34:45.821 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:45.821 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4065131 00:34:45.821 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:45.821 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:45.821 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4065131' 00:34:45.821 killing process with pid 4065131 00:34:45.821 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 4065131 00:34:45.821 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 4065131 00:34:46.089 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:46.089 [2024-11-07 13:39:36.721451] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:34:46.089 [2024-11-07 13:39:36.721566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065131 ] 00:34:46.089 [2024-11-07 13:39:36.858917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.089 [2024-11-07 13:39:36.956780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.089 Running I/O for 15 seconds... 
00:34:46.089 9871.00 IOPS, 38.56 MiB/s [2024-11-07T12:39:54.096Z]
00:34:46.089 [2024-11-07 13:39:39.191167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:46.089 [2024-11-07 13:39:39.191218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.089 [identical print_command/print_completion pairs repeat for WRITE lba 85360-86096 and READ lba 85168-85280 on qid:1; every outstanding command completes with ABORTED - SQ DELETION (00/08)]
00:34:46.091 [2024-11-07 13:39:39.193857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:46.091 [2024-11-07 13:39:39.193875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86104 len:8 PRP1 0x0 PRP2 0x0
00:34:46.091 [2024-11-07 13:39:39.193887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
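Abort storms like the one condensed above are easier to triage as counts than as raw lines. A sketch using only standard text tools, assuming the host-side dump has been saved locally as try.txt (as host/failover.sh@63 does in this run):

    # Total completions aborted by submission-queue deletion.
    grep -c 'ABORTED - SQ DELETION' try.txt
    # Split the printed commands by opcode; in this dump every printed command was aborted.
    grep -oE '(READ|WRITE) sqid:[0-9]+' try.txt | sort | uniq -c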
00:34:46.091 [2024-11-07 13:39:39.193903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:46.092 [the manual-completion pattern repeats for queued WRITE lba 86112-86184 and READ lba 85288-85344, each printed with PRP1 0x0 PRP2 0x0 and completed as ABORTED - SQ DELETION (00/08)]
00:34:46.092 [2024-11-07 13:39:39.194800] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:34:46.092 [2024-11-07 13:39:39.194840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:34:46.092 [2024-11-07 13:39:39.194860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.092 [the same abort is printed for the admin queue's ASYNC EVENT REQUESTs cid:1 through cid:3]
00:34:46.092 [2024-11-07 13:39:39.194970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:34:46.092 [2024-11-07 13:39:39.195018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416700 (9): Bad file descriptor
00:34:46.092 [2024-11-07 13:39:39.198745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:34:46.092 [2024-11-07 13:39:39.269296] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:34:46.092 9570.00 IOPS, 37.38 MiB/s [2024-11-07T12:39:54.099Z]
00:34:46.092 9758.33 IOPS, 38.12 MiB/s [2024-11-07T12:39:54.099Z]
00:34:46.092 9850.25 IOPS, 38.48 MiB/s [2024-11-07T12:39:54.099Z]
00:34:46.092 [2024-11-07 13:39:42.788319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:34:46.092 [2024-11-07 13:39:42.788377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.092 [the abort repeats for admin ASYNC EVENT REQUESTs cid:2, cid:1 and cid:0]
00:34:46.092 [2024-11-07 13:39:42.788466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416700 is same with the state(6) to be set
00:34:46.092 [2024-11-07 13:39:42.791080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:46.092 [2024-11-07 13:39:42.791110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
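The Start failover / Resetting controller successful pair above is the behavior this phase exercises: the host abandons the dead 10.0.0.2:4420 path and reattaches via 10.0.0.2:4421. When driving bdevperf by hand, the reattached controller can be inspected over its RPC socket. A sketch, assuming bdevperf was started with -r /var/tmp/bdevperf.sock (a placeholder socket path, not taken from this log) and that the controller is named NVMe0, inferred here from the NVMe0n1 bdev name:

    # Ask the bdev_nvme layer which transport address the controller is attached to now.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0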
len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 
13:39:42.791361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.092 [2024-11-07 13:39:42.791851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.092 [2024-11-07 13:39:42.791870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.791881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.791895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.791905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.791918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.791929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.791942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.791953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.791966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.791977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.791989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:46.093 [2024-11-07 13:39:42.792347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 
13:39:42.792581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.093 [2024-11-07 13:39:42.792591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.792985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.792997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:67 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.093 [2024-11-07 13:39:42.793320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129272 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.093 [2024-11-07 13:39:42.793331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.094 [2024-11-07 13:39:42.793355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.094 [2024-11-07 13:39:42.793378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129296 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129304 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129312 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129320 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129328 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 
[2024-11-07 13:39:42.793593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129336 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129344 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129352 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129360 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129368 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128600 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128608 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128616 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128624 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.793968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.793977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128632 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.793987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.793998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128640 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128648 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128656 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128664 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128672 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128680 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128688 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128696 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:46.094 [2024-11-07 13:39:42.794300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128704 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128712 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128720 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128728 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128736 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.094 [2024-11-07 13:39:42.794497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.094 [2024-11-07 13:39:42.794506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128744 len:8 PRP1 0x0 PRP2 0x0 00:34:46.094 [2024-11-07 13:39:42.794516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.094 [2024-11-07 13:39:42.794526] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:46.094 [2024-11-07 13:39:42.794535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:46.094 [2024-11-07 13:39:42.794543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128752 len:8 PRP1 0x0 PRP2 0x0
00:34:46.094 [2024-11-07 13:39:42.794554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.094 [2024-11-07 13:39:42.794569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:46.094 [2024-11-07 13:39:42.794577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:46.094 [2024-11-07 13:39:42.794586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128760 len:8 PRP1 0x0 PRP2 0x0
00:34:46.094 [2024-11-07 13:39:42.794597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.094 [2024-11-07 13:39:42.794607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:46.094 [2024-11-07 13:39:42.794615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:46.094 [2024-11-07 13:39:42.794624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128768 len:8 PRP1 0x0 PRP2 0x0
00:34:46.094 [2024-11-07 13:39:42.794634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.094 [2024-11-07 13:39:42.794645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:46.094 [2024-11-07 13:39:42.794652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:46.094 [2024-11-07 13:39:42.794662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128776 len:8 PRP1 0x0 PRP2 0x0
00:34:46.094 [2024-11-07 13:39:42.794672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.094 [2024-11-07 13:39:42.794682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:46.094 [2024-11-07 13:39:42.794689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:46.094 [2024-11-07 13:39:42.794699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129376 len:8 PRP1 0x0 PRP2 0x0
00:34:46.095 [2024-11-07 13:39:42.794709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.095 [2024-11-07 13:39:42.794909] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:34:46.095 [2024-11-07 13:39:42.794926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:34:46.095 [2024-11-07 13:39:42.798678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:34:46.095 [2024-11-07 13:39:42.798721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416700 (9): Bad file descriptor
00:34:46.095 [2024-11-07 13:39:42.864065] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:34:46.095 9792.60 IOPS, 38.25 MiB/s [2024-11-07T12:39:54.102Z] 9873.00 IOPS, 38.57 MiB/s [2024-11-07T12:39:54.102Z] 9905.86 IOPS, 38.69 MiB/s [2024-11-07T12:39:54.102Z] 9979.50 IOPS, 38.98 MiB/s [2024-11-07T12:39:54.102Z] 9994.44 IOPS, 39.04 MiB/s [2024-11-07T12:39:54.102Z] [2024-11-07 13:39:47.169414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:46.095 [2024-11-07 13:39:47.169457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.095 [2024-11-07 13:39:47.169485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:46.095 [2024-11-07 13:39:47.169498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.095 [2024-11-07 13:39:47.169512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:46.095 [2024-11-07 13:39:47.169523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.095 [2024-11-07 13:39:47.169538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:46.095 [2024-11-07 13:39:47.169548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.095 [2024-11-07 13:39:47.169562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:46.095 [2024-11-07 13:39:47.169572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.095 [2024-11-07 13:39:47.169585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:46.095 [2024-11-07 13:39:47.169597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.095 [2024-11-07 13:39:47.169610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:46.095 [2024-11-07 13:39:47.169620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:46.095 [2024-11-07 13:39:47.169633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:46.095 [2024-11-07 13:39:47.169644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:34:46.095 [2024-11-07 13:39:47.169658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.169682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.169706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.169734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.169758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.169781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.169806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.169829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.169853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.169883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 
13:39:47.169906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.169932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.169958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.169983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.169993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.170019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.170048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.170114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.170138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.170163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.170186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.170209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.170233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.170257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.170281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.170305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.095 [2024-11-07 13:39:47.170330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170417] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 
lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.095 [2024-11-07 13:39:47.170736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.095 [2024-11-07 13:39:47.170749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.170759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.170772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.170791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.170804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.170814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.170828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.170838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.170851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.170865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.170878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.170888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.170901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:46.096 [2024-11-07 13:39:47.170911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.170924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.170935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.170948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.170958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.170972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.170983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.170996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.096 [2024-11-07 13:39:47.171100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.096 [2024-11-07 13:39:47.171124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.096 [2024-11-07 
13:39:47.171149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.096 [2024-11-07 13:39:47.171173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.096 [2024-11-07 13:39:47.171197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.096 [2024-11-07 13:39:47.171219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.096 [2024-11-07 13:39:47.171243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:46.096 [2024-11-07 13:39:47.171915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.096 [2024-11-07 13:39:47.171960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108792 len:8 PRP1 0x0 PRP2 0x0 00:34:46.096 [2024-11-07 13:39:47.171973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.171988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.096 [2024-11-07 13:39:47.171997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.096 [2024-11-07 13:39:47.172008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108800 len:8 PRP1 0x0 PRP2 0x0 00:34:46.096 [2024-11-07 13:39:47.172019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.172030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.096 [2024-11-07 13:39:47.172039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.096 [2024-11-07 13:39:47.172048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108808 len:8 PRP1 0x0 PRP2 0x0 00:34:46.096 [2024-11-07 13:39:47.172058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.172069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.096 [2024-11-07 13:39:47.172077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.096 [2024-11-07 13:39:47.172087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108816 len:8 PRP1 0x0 PRP2 0x0 00:34:46.096 [2024-11-07 13:39:47.172097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.172107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.096 [2024-11-07 13:39:47.172115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.096 [2024-11-07 13:39:47.172124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108824 len:8 PRP1 0x0 PRP2 0x0 00:34:46.096 [2024-11-07 13:39:47.172135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 
[2024-11-07 13:39:47.172146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.096 [2024-11-07 13:39:47.172154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.096 [2024-11-07 13:39:47.172162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108832 len:8 PRP1 0x0 PRP2 0x0 00:34:46.096 [2024-11-07 13:39:47.172173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.172183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.096 [2024-11-07 13:39:47.172191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.096 [2024-11-07 13:39:47.172201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108840 len:8 PRP1 0x0 PRP2 0x0 00:34:46.096 [2024-11-07 13:39:47.172213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.172223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.096 [2024-11-07 13:39:47.172233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.096 [2024-11-07 13:39:47.172243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108848 len:8 PRP1 0x0 PRP2 0x0 00:34:46.096 [2024-11-07 13:39:47.172253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.096 [2024-11-07 13:39:47.172263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.096 [2024-11-07 13:39:47.172270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.096 [2024-11-07 13:39:47.172285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108856 len:8 PRP1 0x0 PRP2 0x0 00:34:46.096 [2024-11-07 13:39:47.172296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108864 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108872 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172382] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108880 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108888 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108896 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108904 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108912 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108920 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108928 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108936 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108944 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108256 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108264 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108272 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 
13:39:47.172850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108280 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108288 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108296 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.172959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:46.097 [2024-11-07 13:39:47.172967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:46.097 [2024-11-07 13:39:47.172976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108304 len:8 PRP1 0x0 PRP2 0x0 00:34:46.097 [2024-11-07 13:39:47.172986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.173196] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:34:46.097 [2024-11-07 13:39:47.173231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:46.097 [2024-11-07 13:39:47.173245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.173257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:46.097 [2024-11-07 13:39:47.173267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.173280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:46.097 [2024-11-07 13:39:47.173290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.173302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:34:46.097 [2024-11-07 13:39:47.173312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:46.097 [2024-11-07 13:39:47.173325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:34:46.097 [2024-11-07 13:39:47.173366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416700 (9): Bad file descriptor 00:34:46.097 [2024-11-07 13:39:47.177115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:34:46.097 [2024-11-07 13:39:47.242578] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:34:46.097 10028.00 IOPS, 39.17 MiB/s [2024-11-07T12:39:54.104Z] 10124.82 IOPS, 39.55 MiB/s [2024-11-07T12:39:54.104Z] 10173.92 IOPS, 39.74 MiB/s [2024-11-07T12:39:54.104Z] 10256.69 IOPS, 40.07 MiB/s [2024-11-07T12:39:54.104Z] 10320.57 IOPS, 40.31 MiB/s 00:34:46.097 Latency(us) 00:34:46.097 [2024-11-07T12:39:54.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:46.097 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:46.097 Verification LBA range: start 0x0 length 0x4000 00:34:46.097 NVMe0n1 : 15.01 10357.07 40.46 517.79 0.00 11740.71 573.44 17694.72 00:34:46.097 [2024-11-07T12:39:54.104Z] =================================================================================================================== 00:34:46.097 [2024-11-07T12:39:54.104Z] Total : 10357.07 40.46 517.79 0.00 11740.71 573.44 17694.72 00:34:46.097 Received shutdown signal, test time was about 15.000000 seconds 00:34:46.097 00:34:46.097 Latency(us) 00:34:46.097 [2024-11-07T12:39:54.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:46.097 [2024-11-07T12:39:54.104Z] =================================================================================================================== 00:34:46.097 [2024-11-07T12:39:54.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:46.097 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:34:46.097 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:34:46.097 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:34:46.097 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4068325 00:34:46.097 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4068325 /var/tmp/bdevperf.sock 00:34:46.097 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:34:46.097 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 4068325 ']' 00:34:46.097 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:46.097 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:46.097 13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
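The three "Resetting controller successful" notices above are exactly what the script counts at host/failover.sh@65-67 before moving on. As a minimal standalone sketch of that check (log path and expected count taken from this run):

    # Fail if the log does not show the expected number of recovered failovers.
    logfile=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$logfile")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi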
00:34:46.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
13:39:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:47.041 13:39:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
13:39:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
13:39:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:34:47.041 [2024-11-07 13:39:54.870658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:34:47.041 13:39:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:34:47.302 [2024-11-07 13:39:55.055118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:34:47.302 13:39:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:34:47.562 NVMe0n1
00:34:47.562 13:39:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:34:47.562
00:34:47.823 13:39:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:34:48.084
00:34:48.084 13:39:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:34:48.084 13:39:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:48.084 13:39:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:48.345 13:39:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:34:51.647 13:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
13:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
13:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4069332
13:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 4069332
13:39:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:52.587 {
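This is the scripted half of the failover setup: two alternate listeners are added on the target, and bdevperf is attached to all three paths with -x failover, so the extra paths are held as standby trids rather than active multipath members. Condensed into a hedged sketch (addresses, NQN, and socket copied from this run; rpc.py path shortened for readability):

    # Advertise two alternate TCP ports on the target subsystem.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Register every path with the bdevperf instance; only the first attach
    # reports the bdev name (NVMe0n1), the rest just record standby trids.
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -x failover
    done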
00:34:52.587 "results": [ 00:34:52.587 { 00:34:52.587 "job": "NVMe0n1", 00:34:52.587 "core_mask": "0x1", 00:34:52.587 "workload": "verify", 00:34:52.587 "status": "finished", 00:34:52.587 "verify_range": { 00:34:52.587 "start": 0, 00:34:52.587 "length": 16384 00:34:52.587 }, 00:34:52.587 "queue_depth": 128, 00:34:52.587 "io_size": 4096, 00:34:52.587 "runtime": 1.013614, 00:34:52.587 "iops": 10001.835018064075, 00:34:52.588 "mibps": 39.069668039312795, 00:34:52.588 "io_failed": 0, 00:34:52.588 "io_timeout": 0, 00:34:52.588 "avg_latency_us": 12736.434157953574, 00:34:52.588 "min_latency_us": 2839.8933333333334, 00:34:52.588 "max_latency_us": 10704.213333333333 00:34:52.588 } 00:34:52.588 ], 00:34:52.588 "core_count": 1 00:34:52.588 } 00:34:52.588 13:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:52.588 [2024-11-07 13:39:53.945650] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:34:52.588 [2024-11-07 13:39:53.945762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068325 ] 00:34:52.588 [2024-11-07 13:39:54.086561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.588 [2024-11-07 13:39:54.185782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.588 [2024-11-07 13:39:56.197795] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:52.588 [2024-11-07 13:39:56.197874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:52.588 [2024-11-07 13:39:56.197894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:52.588 [2024-11-07 13:39:56.197911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:52.588 [2024-11-07 13:39:56.197922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:52.588 [2024-11-07 13:39:56.197934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:52.588 [2024-11-07 13:39:56.197946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:52.588 [2024-11-07 13:39:56.197957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:52.588 [2024-11-07 13:39:56.197968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:52.588 [2024-11-07 13:39:56.197984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:34:52.588 [2024-11-07 13:39:56.198037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:34:52.588 [2024-11-07 13:39:56.198067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416700 (9): Bad file descriptor
00:34:52.588 [2024-11-07 13:39:56.209485] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:34:52.588 Running I/O for 1 seconds...
00:34:52.588 9953.00 IOPS, 38.88 MiB/s
00:34:52.588 Latency(us)
00:34:52.588 [2024-11-07T12:40:00.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:52.588 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:52.588 Verification LBA range: start 0x0 length 0x4000
00:34:52.588 NVMe0n1 : 1.01 10001.84 39.07 0.00 0.00 12736.43 2839.89 10704.21
00:34:52.588 [2024-11-07T12:40:00.595Z] ===================================================================================================================
00:34:52.588 [2024-11-07T12:40:00.595Z] Total : 10001.84 39.07 0.00 0.00 12736.43 2839.89 10704.21
13:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
13:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:34:52.848 13:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:53.109 13:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
13:40:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:34:53.109 13:40:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:53.370 13:40:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:34:56.674 13:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
13:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
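Here the failovers are forced from the initiator side: the active path is detached while I/O keeps running, and after each detach (and a final sleep) the script confirms the controller is still alive on a surviving path. One such step as a standalone sketch (socket, NQN, and port as logged above; rpc.py path shortened):

    # Drop the current path; -x failover should shift I/O to a standby trid.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    # The controller must survive losing the path.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0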
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:56.674 13:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4068325' 00:34:56.674 killing process with pid 4068325 00:34:56.674 13:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 4068325 00:34:56.674 13:40:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 4068325 00:34:57.247 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:57.247 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:57.508 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:57.508 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:57.508 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:57.508 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:57.508 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:34:57.508 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:57.508 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:34:57.508 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:57.508 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:57.508 rmmod nvme_tcp 00:34:57.508 rmmod nvme_fabrics 00:34:57.508 rmmod nvme_keyring 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 4064654 ']' 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 4064654 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 4064654 ']' 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 4064654 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4064654 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4064654' 00:34:57.509 killing process with pid 4064654 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 4064654 00:34:57.509 13:40:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 4064654 00:34:58.451 13:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:34:58.451 13:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:58.451 13:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:58.451 13:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:34:58.451 13:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:34:58.451 13:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:58.451 13:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:34:58.451 13:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:58.451 13:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:58.451 13:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.451 13:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:58.451 13:40:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:00.429 00:35:00.429 real 0m42.473s 00:35:00.429 user 2m7.121s 00:35:00.429 sys 0m9.698s 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:00.429 ************************************ 00:35:00.429 END TEST nvmf_failover 00:35:00.429 ************************************ 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.429 ************************************ 00:35:00.429 START TEST nvmf_host_discovery 00:35:00.429 ************************************ 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:00.429 * Looking for test storage... 
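A quick recap before the discovery output begins. The nvmf_failover run that just ended left two reusable patterns in the trace above. First, bdevperf printed its JSON summary (the "results" block earlier); a minimal jq sketch for pulling the headline numbers out of such a summary, assuming it was captured to a file (perf.json is a hypothetical name, the field names are taken from the block above):

    # Per-job IOPS and latency from a bdevperf JSON summary (perf.json is hypothetical).
    jq -r '.results[] |
      "\(.job): \(.iops | floor) IOPS, avg \(.avg_latency_us | floor) us (min \(.min_latency_us | floor), max \(.max_latency_us | floor))"' perf.json

Second, host/failover.sh@95-@103 drove the failover itself over the bdevperf RPC socket: confirm the NVMe0 controller exists, detach the 10.0.0.2:4422 and 10.0.0.2:4421 paths in turn, wait, and confirm NVMe0 still answers on the remaining 4420 path. A sketch of that sequence reconstructed from the xtrace, not the script itself:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0    # controller present
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0    # survived losing 4422
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
    sleep 3                                                    # give I/O time to settle on 4420
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0    # survived losing 4421 too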
00:35:00.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:00.429 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:35:00.430 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:35:00.430 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.430 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:35:00.430 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.430 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:35:00.430 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:35:00.430 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:00.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.691 --rc genhtml_branch_coverage=1 00:35:00.691 --rc genhtml_function_coverage=1 00:35:00.691 --rc genhtml_legend=1 00:35:00.691 --rc geninfo_all_blocks=1 00:35:00.691 --rc geninfo_unexecuted_blocks=1 00:35:00.691 00:35:00.691 ' 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:00.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.691 --rc genhtml_branch_coverage=1 00:35:00.691 --rc genhtml_function_coverage=1 00:35:00.691 --rc genhtml_legend=1 00:35:00.691 --rc geninfo_all_blocks=1 00:35:00.691 --rc geninfo_unexecuted_blocks=1 00:35:00.691 00:35:00.691 ' 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:00.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.691 --rc genhtml_branch_coverage=1 00:35:00.691 --rc genhtml_function_coverage=1 00:35:00.691 --rc genhtml_legend=1 00:35:00.691 --rc geninfo_all_blocks=1 00:35:00.691 --rc geninfo_unexecuted_blocks=1 00:35:00.691 00:35:00.691 ' 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:00.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.691 --rc genhtml_branch_coverage=1 00:35:00.691 --rc genhtml_function_coverage=1 00:35:00.691 --rc genhtml_legend=1 00:35:00.691 --rc geninfo_all_blocks=1 00:35:00.691 --rc geninfo_unexecuted_blocks=1 00:35:00.691 00:35:00.691 ' 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:35:00.691 13:40:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:00.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:00.691 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:00.692 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:00.692 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:00.692 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.692 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:00.692 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.692 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:00.692 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:00.692 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:35:00.692 13:40:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:08.828 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:08.828 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:08.828 13:40:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.828 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:08.829 Found net devices under 0000:31:00.0: cvl_0_0 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:08.829 Found net devices under 0000:31:00.1: cvl_0_1 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:08.829 
13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:08.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:08.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:35:08.829 00:35:08.829 --- 10.0.0.2 ping statistics --- 00:35:08.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.829 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:08.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:08.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:35:08.829 00:35:08.829 --- 10.0.0.1 ping statistics --- 00:35:08.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.829 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=4075310 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 4075310 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 4075310 ']' 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:08.829 13:40:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:08.829 [2024-11-07 13:40:16.503228] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
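The setup traced above (nvmf/common.sh@265-@291) builds the point-to-point test topology: the first e810 port (cvl_0_0) moves into a fresh network namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an ACCEPT rule tagged SPDK_NVMF is punched for the NVMe/TCP port, and a ping in each direction proves the link. Condensed into a sketch, commands reconstructed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tagged with an SPDK_NVMF comment so teardown can strip it with
    # iptables-save | grep -v SPDK_NVMF | iptables-restore (the iptr step seen earlier).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns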
00:35:08.829 [2024-11-07 13:40:16.503337] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:08.829 [2024-11-07 13:40:16.663268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.829 [2024-11-07 13:40:16.760939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:08.829 [2024-11-07 13:40:16.760982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:08.829 [2024-11-07 13:40:16.760993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:08.829 [2024-11-07 13:40:16.761005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:08.829 [2024-11-07 13:40:16.761016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:08.829 [2024-11-07 13:40:16.762110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:09.400 [2024-11-07 13:40:17.293238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:09.400 [2024-11-07 13:40:17.305547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:09.400 null0 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:09.400 null1 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4075357 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4075357 /tmp/host.sock 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 4075357 ']' 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:09.400 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:09.400 13:40:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:09.660 [2024-11-07 13:40:17.443621] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
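The discovery test runs two SPDK processes: the nvmf target (pid 4075310, core mask 0x2, inside the cvl_0_0_ns_spdk namespace, default RPC socket) and a host-side instance (pid 4075357, core mask 0x1, RPC socket /tmp/host.sock) whose bdev_nvme layer acts as the discovery client. The target-side bring-up traced above condenses to a few RPCs; the sketch below assumes rpc_cmd wraps scripts/rpc.py as it does in these scripts:

    # Target side (default RPC socket): transport, discovery listener, backing bdevs.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512      # null bdevs, sized as in the trace
    rpc_cmd bdev_null_create null1 1000 512
    # Host side (traced just below): start the discovery service against port 8009.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test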
00:35:09.660 [2024-11-07 13:40:17.443756] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4075357 ] 00:35:09.660 [2024-11-07 13:40:17.596298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.921 [2024-11-07 13:40:17.694421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:10.493 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.755 [2024-11-07 13:40:18.504575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:35:10.755 13:40:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:35:11.326 [2024-11-07 13:40:19.232568] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:11.326 [2024-11-07 13:40:19.232601] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:11.326 
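The '' == '' and nvme0 comparisons above come from two small helpers that recur throughout this trace: get_subsystem_names/get_bdev_list, which flatten RPC output through jq, sort, and xargs, and waitforcondition, which re-evaluates a condition string up to ten times with a one-second sleep between tries. A sketch of both, reconstructed from the xtrace (the upstream autotest_common.sh/discovery.sh may differ in detail):

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
            sleep 1
        done
        return 1
    }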
[2024-11-07 13:40:19.232633] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:11.587 [2024-11-07 13:40:19.362099] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:11.587 [2024-11-07 13:40:19.544511] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:11.587 [2024-11-07 13:40:19.545836] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000417600:1 started. 00:35:11.587 [2024-11-07 13:40:19.547846] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:11.587 [2024-11-07 13:40:19.547878] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:11.587 [2024-11-07 13:40:19.552044] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000417600 was disconnected and freed. delete nvme_qpair. 00:35:11.852 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:11.852 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:11.852 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:35:11.852 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:11.852 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:11.852 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:11.852 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.853 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:11.853 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:11.853 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.853 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.853 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:11.853 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:11.853 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:11.853 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:11.853 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:11.853 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:35:11.854 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:35:11.854 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:11.854 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.854 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:11.854 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:11.854 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:11.854 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:11.854 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.854 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:35:11.854 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:11.854 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:11.855 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:11.856 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:11.856 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:11.856 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:11.856 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:11.856 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:35:11.856 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:11.856 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:11.856 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.856 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.119 [2024-11-07 13:40:19.909475] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000417b00:1 started. 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:35:12.119 [2024-11-07 13:40:19.914088] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000417b00 was disconnected and freed. delete nvme_qpair. 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.119 13:40:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.119 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:12.119 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:12.119 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:35:12.119 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:12.119 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:35:12.119 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.120 [2024-11-07 13:40:20.016690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:12.120 [2024-11-07 13:40:20.016984] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:12.120 [2024-11-07 13:40:20.017021] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:12.120 13:40:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:12.120 [2024-11-07 13:40:20.104845] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:35:12.120 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.379 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:35:12.380 13:40:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:35:12.640 [2024-11-07 13:40:20.411792] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:35:12.640 [2024-11-07 13:40:20.411858] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:12.640 [2024-11-07 13:40:20.411882] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:12.640 [2024-11-07 13:40:20.411897] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:13.210 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:13.210 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:13.210 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:35:13.210 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:13.210 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:13.210 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.210 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:13.210 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.210 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:13.210 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.472 [2024-11-07 13:40:21.292230] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:13.472 [2024-11-07 13:40:21.292262] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:13.472 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:13.472 [2024-11-07 13:40:21.298875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.472 [2024-11-07 13:40:21.298908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.472 [2024-11-07 13:40:21.298923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.472 [2024-11-07 13:40:21.298935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.472 [2024-11-07 13:40:21.298946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.473 [2024-11-07 13:40:21.298957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.473 [2024-11-07 13:40:21.298969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.473 [2024-11-07 13:40:21.298979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.473 [2024-11-07 13:40:21.298995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416e80 is same with the state(6) to be set 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:13.473 [2024-11-07 13:40:21.308879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416e80 (9): Bad file descriptor 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.473 [2024-11-07 13:40:21.318914] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:13.473 [2024-11-07 13:40:21.318940] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:13.473 [2024-11-07 13:40:21.318953] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:13.473 [2024-11-07 13:40:21.318962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:13.473 [2024-11-07 13:40:21.318998] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:13.473 [2024-11-07 13:40:21.319252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.473 [2024-11-07 13:40:21.319276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416e80 with addr=10.0.0.2, port=4420 00:35:13.473 [2024-11-07 13:40:21.319290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416e80 is same with the state(6) to be set 00:35:13.473 [2024-11-07 13:40:21.319310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416e80 (9): Bad file descriptor 00:35:13.473 [2024-11-07 13:40:21.319327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:13.473 [2024-11-07 13:40:21.319342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:13.473 [2024-11-07 13:40:21.319354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
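
The shell helpers being traced throughout this run (host/discovery.sh@55/@59/@74/@75) are thin rpc_cmd/jq pipelines. A minimal reconstruction from the xtrace markers above, assuming /tmp/host.sock and the variable names exactly as they appear in the trace (a sketch, not the verbatim test source):

    get_subsystem_names() {
        # @59: names of the NVMe controllers the host has attached
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # @55: bdevs created from the discovered namespaces
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_notification_count() {
        # @74: count only the notifications newer than the notify_id cursor
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
        # @75: advance the cursor so each event is counted once
        # (matches the trace: notify_id steps 0 -> 1 -> 2 -> 4)
        notify_id=$((notify_id + notification_count))
    }

xargs with no arguments collapses the sorted names onto one space-separated line, which is what string comparisons such as [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]] rely on.
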
00:35:13.473 [2024-11-07 13:40:21.319365] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:13.473 [2024-11-07 13:40:21.319375] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:13.473 [2024-11-07 13:40:21.319383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:13.473 [2024-11-07 13:40:21.329034] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:13.473 [2024-11-07 13:40:21.329057] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:13.473 [2024-11-07 13:40:21.329065] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:13.473 [2024-11-07 13:40:21.329073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:13.473 [2024-11-07 13:40:21.329097] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:13.473 [2024-11-07 13:40:21.329416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.473 [2024-11-07 13:40:21.329439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416e80 with addr=10.0.0.2, port=4420 00:35:13.473 [2024-11-07 13:40:21.329451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416e80 is same with the state(6) to be set 00:35:13.473 [2024-11-07 13:40:21.329467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416e80 (9): Bad file descriptor 00:35:13.473 [2024-11-07 13:40:21.329492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:13.473 [2024-11-07 13:40:21.329502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:13.473 [2024-11-07 13:40:21.329513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:13.473 [2024-11-07 13:40:21.329522] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:13.473 [2024-11-07 13:40:21.329530] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:13.473 [2024-11-07 13:40:21.329536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:13.473 [2024-11-07 13:40:21.339134] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:13.473 [2024-11-07 13:40:21.339159] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:13.473 [2024-11-07 13:40:21.339167] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:13.473 [2024-11-07 13:40:21.339174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:13.473 [2024-11-07 13:40:21.339198] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:13.473 [2024-11-07 13:40:21.339378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.473 [2024-11-07 13:40:21.339396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416e80 with addr=10.0.0.2, port=4420 00:35:13.473 [2024-11-07 13:40:21.339407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416e80 is same with the state(6) to be set 00:35:13.473 [2024-11-07 13:40:21.339424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416e80 (9): Bad file descriptor 00:35:13.473 [2024-11-07 13:40:21.339439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:13.473 [2024-11-07 13:40:21.339449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:13.473 [2024-11-07 13:40:21.339459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:13.473 [2024-11-07 13:40:21.339475] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:13.473 [2024-11-07 13:40:21.339483] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:13.473 [2024-11-07 13:40:21.339490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:13.473 [2024-11-07 13:40:21.349231] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:13.473 [2024-11-07 13:40:21.349253] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:13.473 [2024-11-07 13:40:21.349260] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:13.473 [2024-11-07 13:40:21.349271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:13.473 [2024-11-07 13:40:21.349298] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
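
The repeated 'connect() failed, errno = 111' (ECONNREFUSED) entries in this stretch are expected rather than a fault: host/discovery.sh@127 has just removed the 4420 listener, so the host's reconnect attempts to that port keep failing until the next discovery log page prunes the stale path (the '4420 not found' entry further below). Reconstructed from the @127/@131/@63 markers, the step under test reduces to roughly this, where get_subsystem_paths is an inferred helper:

    get_subsystem_paths() {
        # @63: trsvcid (port) of every path to controller $1, numerically sorted
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # Drop the first listener, then poll until only the second port remains.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
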
00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:13.473 [2024-11-07 13:40:21.349621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.473 [2024-11-07 13:40:21.349641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416e80 with addr=10.0.0.2, port=4420 00:35:13.473 [2024-11-07 13:40:21.349652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416e80 is same with the state(6) to be set 00:35:13.473 [2024-11-07 13:40:21.349670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416e80 (9): Bad file descriptor 00:35:13.473 [2024-11-07 13:40:21.349693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:13.473 [2024-11-07 13:40:21.349704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:13.473 [2024-11-07 13:40:21.349714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:13.473 [2024-11-07 13:40:21.349723] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:13.473 [2024-11-07 13:40:21.349731] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:13.473 [2024-11-07 13:40:21.349737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.473 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:13.473 [2024-11-07 13:40:21.359334] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:13.473 [2024-11-07 13:40:21.359357] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:13.473 [2024-11-07 13:40:21.359364] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:35:13.473 [2024-11-07 13:40:21.359371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:13.473 [2024-11-07 13:40:21.359393] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:13.473 [2024-11-07 13:40:21.359742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.474 [2024-11-07 13:40:21.359762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416e80 with addr=10.0.0.2, port=4420 00:35:13.474 [2024-11-07 13:40:21.359774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416e80 is same with the state(6) to be set 00:35:13.474 [2024-11-07 13:40:21.359794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416e80 (9): Bad file descriptor 00:35:13.474 [2024-11-07 13:40:21.359819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:13.474 [2024-11-07 13:40:21.359829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:13.474 [2024-11-07 13:40:21.359840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:13.474 [2024-11-07 13:40:21.359849] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:13.474 [2024-11-07 13:40:21.359857] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:13.474 [2024-11-07 13:40:21.359869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:13.474 [2024-11-07 13:40:21.369427] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:13.474 [2024-11-07 13:40:21.369453] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:13.474 [2024-11-07 13:40:21.369461] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:13.474 [2024-11-07 13:40:21.369468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:13.474 [2024-11-07 13:40:21.369497] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
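
Every waitforcondition call in this log runs the same bounded polling loop, whose shape is fully visible in the autotest_common.sh@916-922 markers. A sketch from those markers; the timeout return after the loop is an assumption, since only the success path is exercised in this trace:

    waitforcondition() {
        local cond=$1   # @916: the condition, passed as one quoted string
        local max=10    # @917: retry budget
        while ((max--)); do          # @918
            if eval "$cond"; then    # @919: re-evaluate the condition
                return 0             # @920: condition met
            fi
            sleep 1                  # @922: back off before retrying
        done
        return 1  # assumed: give up after ~10 attempts (never reached above)
    }
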
00:35:13.474 [2024-11-07 13:40:21.369850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.474 [2024-11-07 13:40:21.369875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416e80 with addr=10.0.0.2, port=4420 00:35:13.474 [2024-11-07 13:40:21.369887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416e80 is same with the state(6) to be set 00:35:13.474 [2024-11-07 13:40:21.369903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416e80 (9): Bad file descriptor 00:35:13.474 [2024-11-07 13:40:21.369919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:13.474 [2024-11-07 13:40:21.369928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:13.474 [2024-11-07 13:40:21.369938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:13.474 [2024-11-07 13:40:21.369947] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:13.474 [2024-11-07 13:40:21.369955] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:13.474 [2024-11-07 13:40:21.369962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:13.474 [2024-11-07 13:40:21.378424] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:35:13.474 [2024-11-07 13:40:21.378457] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.474 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # (( max-- )) 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.735 13:40:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.119 [2024-11-07 13:40:22.742058] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:15.119 [2024-11-07 13:40:22.742085] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:15.119 [2024-11-07 13:40:22.742117] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:15.119 [2024-11-07 13:40:22.829392] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:35:15.119 [2024-11-07 13:40:22.933443] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:35:15.119 [2024-11-07 13:40:22.934656] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x615000419680:1 started. 00:35:15.119 [2024-11-07 13:40:22.936980] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:15.119 [2024-11-07 13:40:22.937021] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:15.119 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.119 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:15.119 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:35:15.119 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:15.119 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:15.119 [2024-11-07 13:40:22.940543] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x615000419680 was disconnected and freed. delete nvme_qpair. 
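
host/discovery.sh@143 wraps the second bdev_nvme_start_discovery in NOT because starting discovery again under the same name (-b nvme) must fail: the JSON-RPC response that follows returns code -17, 'File exists', and the test passes precisely because the call errors out. Judging from the autotest_common.sh@650-677 markers (es=1, the es > 128 check, (( !es == 0 ))), NOT is essentially an exit-status inverter; a simplified sketch, with the real helper's signal-exit handling reduced to a plain failure:

    NOT() {
        local es=0
        "$@" || es=$?             # run the wrapped command; rpc_cmd exits non-zero here
        ((es > 128)) && return 1  # assumed: exit-by-signal still counts as a real failure
        ((es != 0))               # @677: succeed only if the command itself failed
    }
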
00:35:15.119 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:15.119 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:15.119 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:15.119 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:15.119 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.119 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.120 request: 00:35:15.120 { 00:35:15.120 "name": "nvme", 00:35:15.120 "trtype": "tcp", 00:35:15.120 "traddr": "10.0.0.2", 00:35:15.120 "adrfam": "ipv4", 00:35:15.120 "trsvcid": "8009", 00:35:15.120 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:15.120 "wait_for_attach": true, 00:35:15.120 "method": "bdev_nvme_start_discovery", 00:35:15.120 "req_id": 1 00:35:15.120 } 00:35:15.120 Got JSON-RPC error response 00:35:15.120 response: 00:35:15.120 { 00:35:15.120 "code": -17, 00:35:15.120 "message": "File exists" 00:35:15.120 } 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:15.120 13:40:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.120 request: 00:35:15.120 { 00:35:15.120 "name": "nvme_second", 00:35:15.120 "trtype": "tcp", 00:35:15.120 "traddr": "10.0.0.2", 00:35:15.120 "adrfam": "ipv4", 00:35:15.120 "trsvcid": "8009", 00:35:15.120 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:15.120 "wait_for_attach": true, 00:35:15.120 "method": "bdev_nvme_start_discovery", 00:35:15.120 "req_id": 1 00:35:15.120 } 00:35:15.120 Got JSON-RPC error response 00:35:15.120 response: 00:35:15.120 { 00:35:15.120 "code": -17, 00:35:15.120 "message": "File exists" 00:35:15.120 } 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r 
'.[].name' 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:15.120 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.381 13:40:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:16.322 [2024-11-07 13:40:24.188609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.322 [2024-11-07 13:40:24.188650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000419b80 with addr=10.0.0.2, port=8010 00:35:16.322 [2024-11-07 13:40:24.188699] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:16.322 [2024-11-07 13:40:24.188711] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:16.322 [2024-11-07 13:40:24.188731] 
bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:17.263 [2024-11-07 13:40:25.190877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.263 [2024-11-07 13:40:25.190910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000419e00 with addr=10.0.0.2, port=8010 00:35:17.263 [2024-11-07 13:40:25.190953] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:17.263 [2024-11-07 13:40:25.190963] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:17.263 [2024-11-07 13:40:25.190973] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:18.204 [2024-11-07 13:40:26.192880] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:35:18.204 request: 00:35:18.204 { 00:35:18.204 "name": "nvme_second", 00:35:18.204 "trtype": "tcp", 00:35:18.204 "traddr": "10.0.0.2", 00:35:18.204 "adrfam": "ipv4", 00:35:18.204 "trsvcid": "8010", 00:35:18.204 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:18.204 "wait_for_attach": false, 00:35:18.205 "attach_timeout_ms": 3000, 00:35:18.205 "method": "bdev_nvme_start_discovery", 00:35:18.205 "req_id": 1 00:35:18.205 } 00:35:18.205 Got JSON-RPC error response 00:35:18.205 response: 00:35:18.205 { 00:35:18.205 "code": -110, 00:35:18.205 "message": "Connection timed out" 00:35:18.205 } 00:35:18.205 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:18.205 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:35:18.205 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:18.205 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:18.205 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:18.205 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:35:18.205 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:18.205 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:18.205 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.205 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:18.205 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.205 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4075357 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:18.466 rmmod nvme_tcp 00:35:18.466 rmmod nvme_fabrics 00:35:18.466 rmmod nvme_keyring 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 4075310 ']' 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 4075310 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 4075310 ']' 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 4075310 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4075310 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4075310' 00:35:18.466 killing process with pid 4075310 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 4075310 00:35:18.466 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 4075310 00:35:19.039 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:19.039 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:19.039 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:19.039 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:35:19.039 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:19.039 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:35:19.039 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:35:19.039 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:19.039 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:19.039 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.039 13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.039 
13:40:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:21.584 00:35:21.584 real 0m20.784s 00:35:21.584 user 0m23.760s 00:35:21.584 sys 0m7.455s 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.584 ************************************ 00:35:21.584 END TEST nvmf_host_discovery 00:35:21.584 ************************************ 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.584 ************************************ 00:35:21.584 START TEST nvmf_host_multipath_status 00:35:21.584 ************************************ 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:21.584 * Looking for test storage... 00:35:21.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:35:21.584 13:40:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:21.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.584 --rc genhtml_branch_coverage=1 00:35:21.584 --rc genhtml_function_coverage=1 00:35:21.584 --rc genhtml_legend=1 00:35:21.584 --rc geninfo_all_blocks=1 00:35:21.584 --rc geninfo_unexecuted_blocks=1 00:35:21.584 00:35:21.584 ' 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:21.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.584 --rc genhtml_branch_coverage=1 00:35:21.584 --rc genhtml_function_coverage=1 00:35:21.584 --rc genhtml_legend=1 00:35:21.584 --rc geninfo_all_blocks=1 00:35:21.584 --rc geninfo_unexecuted_blocks=1 00:35:21.584 00:35:21.584 ' 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:21.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.584 --rc genhtml_branch_coverage=1 00:35:21.584 --rc genhtml_function_coverage=1 00:35:21.584 --rc genhtml_legend=1 00:35:21.584 --rc geninfo_all_blocks=1 00:35:21.584 --rc geninfo_unexecuted_blocks=1 00:35:21.584 00:35:21.584 ' 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:21.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.584 --rc genhtml_branch_coverage=1 00:35:21.584 --rc genhtml_function_coverage=1 00:35:21.584 --rc 
genhtml_legend=1 00:35:21.584 --rc geninfo_all_blocks=1 00:35:21.584 --rc geninfo_unexecuted_blocks=1 00:35:21.584 00:35:21.584 ' 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.584 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:35:21.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:35:21.585 13:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:29.748 13:40:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.748 
13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:29.748 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:29.748 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:29.748 Found net devices under 0000:31:00.0: cvl_0_0 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:29.748 Found net devices under 0000:31:00.1: cvl_0_1 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:29.748 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:29.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:29.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:35:29.749 00:35:29.749 --- 10.0.0.2 ping statistics --- 00:35:29.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.749 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:29.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:29.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:35:29.749 00:35:29.749 --- 10.0.0.1 ping statistics --- 00:35:29.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.749 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=4082027 00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 4082027 
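Condensed, the nvmf_tcp_init plumbing traced above builds the two-endpoint test topology: the target NIC moves into a network namespace while the initiator NIC stays in the root namespace, then connectivity is verified both ways. A sketch under this run's cvl_0_* device names, each command taken from the nvmf/common.sh trace:

  ip netns add cvl_0_0_ns_spdk                                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside ns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # root ns reaches the target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace reaches the initiator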
00:35:29.749 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:35:30.010 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 4082027 ']' 00:35:30.010 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:30.010 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:30.010 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:30.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:30.010 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:30.010 13:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:30.010 [2024-11-07 13:40:37.851711] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:35:30.010 [2024-11-07 13:40:37.851848] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:30.010 [2024-11-07 13:40:38.009343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:30.271 [2024-11-07 13:40:38.105808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:30.271 [2024-11-07 13:40:38.105855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:30.271 [2024-11-07 13:40:38.105874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:30.271 [2024-11-07 13:40:38.105888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:30.271 [2024-11-07 13:40:38.105897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
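The target application then starts inside that namespace, and its startup notices double as instructions for capturing runtime traces. A brief sketch, assuming the spdk checkout shown in the paths above as the working directory:

  # Launch as in the log: shm id 0, all tracepoint groups enabled, core mask 0x3 (cores 0-1).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # Snapshot trace events while it runs, per the app's own notice:
  spdk_trace -s nvmf -i 0
  # Or copy /dev/shm/nvmf_trace.0 for offline analysis, as the same notice suggests.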
00:35:30.271 [2024-11-07 13:40:38.107776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:30.271 [2024-11-07 13:40:38.107800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:30.843 13:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:30.843 13:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:35:30.843 13:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:30.843 13:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:30.843 13:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:30.843 13:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:30.843 13:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4082027 00:35:30.843 13:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:30.843 [2024-11-07 13:40:38.814733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:30.843 13:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:31.104 Malloc0 00:35:31.104 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:35:31.365 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:31.625 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:31.625 [2024-11-07 13:40:39.539075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:31.625 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:31.886 [2024-11-07 13:40:39.707461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:31.886 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4082463 00:35:31.886 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:35:31.886 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:31.886 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4082463 
/var/tmp/bdevperf.sock 00:35:31.886 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 4082463 ']' 00:35:31.886 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:31.886 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:31.886 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:31.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:31.886 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:31.886 13:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:32.828 13:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:32.828 13:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:35:32.828 13:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:35:32.828 13:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:33.089 Nvme0n1 00:35:33.350 13:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:33.612 Nvme0n1 00:35:33.612 13:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:35:33.612 13:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:35:35.526 13:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:35:35.526 13:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:35.787 13:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:35.787 13:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:35:37.172 13:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:35:37.172 13:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:37.172 13:40:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:37.172 13:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:37.172 13:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:37.172 13:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:37.172 13:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:37.172 13:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:37.172 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:37.172 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:37.172 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:37.172 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:37.434 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:37.434 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:37.434 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:37.434 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:37.694 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:37.694 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:37.694 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:37.694 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:37.694 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:37.955 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:37.955 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:37.955 13:40:45 
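
Each check_status round in this trace is six of these port_status probes (current, connected, accessible; once per listener). Every probe re-runs bdev_nvme_get_io_paths over the bdevperf RPC socket and pulls a single boolean out of the JSON with jq, keyed on the listener's trsvcid. A sketch of the helper as it appears at multipath_status.sh@64:

port_status() {  # usage: port_status <trsvcid> <current|connected|accessible> <expected>
  local val
  val=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
  [[ "$val" == "$3" ]]
}
port_status 4420 current true    # the probe logged just above
port_status 4421 current false
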
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:37.955 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:37.955 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:35:37.955 13:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:38.216 13:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:38.476 13:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:39.417 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:39.417 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:39.417 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:39.417 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:39.677 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:39.677 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:39.677 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:39.677 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:39.677 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:39.677 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:39.677 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:39.677 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:39.937 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:39.937 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:39.937 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:39.937 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:40.197 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:40.197 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:40.197 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:40.197 13:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:40.197 13:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:40.197 13:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:40.197 13:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:40.197 13:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:40.456 13:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:40.457 13:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:40.457 13:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:40.716 13:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:40.716 13:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:42.095 13:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:42.095 13:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:42.095 13:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:42.095 13:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:42.095 13:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:42.095 13:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:42.095 13:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:42.095 13:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:42.356 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:42.356 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:42.356 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:42.356 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:42.356 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:42.356 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:42.356 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:42.356 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:42.617 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:42.617 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:42.617 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:42.617 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:42.879 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:42.879 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:42.879 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:42.879 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:42.879 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:42.879 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:35:42.879 13:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:35:43.139 13:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:43.399 13:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:44.339 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:44.339 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:44.339 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:44.339 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:44.605 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:44.605 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:44.605 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:44.605 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:44.605 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:44.605 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:44.869 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:44.869 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:44.869 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:44.869 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:44.869 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:44.869 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:45.130 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:45.130 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:45.130 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:35:45.130 13:40:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:45.391 13:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:45.391 13:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:45.391 13:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:45.391 13:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:45.391 13:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:45.391 13:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:45.391 13:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:45.652 13:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:45.913 13:40:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:46.853 13:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:46.853 13:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:46.853 13:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:46.853 13:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:47.114 13:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:47.114 13:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:47.114 13:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:47.114 13:40:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:47.114 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:47.114 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:47.114 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:47.114 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:47.375 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:47.375 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:47.375 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:47.375 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:47.635 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:47.635 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:47.635 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:47.635 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:47.896 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:47.896 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:47.896 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:47.896 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:47.896 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:47.896 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:47.896 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:48.156 13:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:48.417 13:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:49.359 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:49.359 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:49.359 13:40:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:49.359 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:49.359 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:49.620 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:49.620 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:49.620 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:49.620 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:49.620 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:49.620 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:49.620 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:49.881 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:49.881 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:49.881 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:49.881 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:50.142 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:50.142 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:50.142 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:50.142 13:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:50.142 13:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:50.142 13:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:50.142 13:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:50.142 
13:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:50.402 13:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:50.402 13:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:50.663 13:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:35:50.663 13:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:50.924 13:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:50.924 13:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:52.308 13:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:52.308 13:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:52.308 13:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.308 13:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:52.308 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:52.308 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:52.308 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.308 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:52.308 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:52.308 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:52.308 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.308 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:52.569 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:52.569 13:41:00 
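
At @116 the test switches Nvme0n1 from the (default) active_passive policy to active_active, resets both listeners to optimized, and then expects check_status true true true true true true: with an active_active policy every optimized path carries I/O, so both paths now report current == true instead of a single elected one. The policy change is a one-liner over the same RPC socket:

# Spread I/O across all optimized paths rather than a single active one.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
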
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:52.569 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:52.569 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.830 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:52.830 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:52.830 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.830 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:52.830 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:52.830 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:52.830 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.830 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:53.091 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:53.091 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:53.091 13:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:53.352 13:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:53.352 13:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:54.738 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:54.738 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:54.738 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:54.738 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:54.738 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:54.738 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:54.738 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:54.738 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:54.738 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:54.738 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:54.738 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:54.738 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:54.999 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:54.999 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:54.999 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:54.999 13:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:55.260 13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:55.260 13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:55.260 13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:55.260 13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:55.526 13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:55.526 13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:55.526 13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:55.526 13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:55.526 13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:55.526 13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:55.526 
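
The set_ANA_state helper just invoked at @129 (and earlier at @90, @94, @100, @104, @108, @112, @119, @123) drives the target side: one nvmf_subsystem_listener_set_ana_state call per listener, with the caller sleeping one second afterwards so the host can observe the ANA change before the next check_status. A sketch matching the @59/@60 lines (the target-side rpc.py here runs against its default socket, not /var/tmp/bdevperf.sock):

set_ANA_state() {  # usage: set_ANA_state <state for 4420> <state for 4421>
  local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n "$1"
  "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}
set_ANA_state non_optimized non_optimized   # the call at @129 above
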
13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:55.837 13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:56.124 13:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:57.096 13:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:57.096 13:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:57.096 13:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:57.096 13:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:57.096 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:57.096 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:57.096 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:57.096 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:57.356 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:57.356 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:57.356 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:57.356 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:57.615 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:57.615 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:57.615 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:57.615 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:57.615 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:57.615 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:57.615 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:57.615 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:57.875 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:57.876 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:57.876 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:57.876 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:58.136 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:58.136 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:58.136 13:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:58.396 13:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:58.396 13:41:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:59.337 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:59.337 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:59.337 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:59.337 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:59.598 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:59.598 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:59.598 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:59.598 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:59.857 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:35:59.857 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:59.857 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:59.857 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:00.117 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:00.117 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:00.117 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:00.117 13:41:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:00.117 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:00.117 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:00.117 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:00.117 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:00.377 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:00.377 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:00.377 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:00.377 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:00.637 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:00.637 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4082463 00:36:00.637 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 4082463 ']' 00:36:00.637 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 4082463 00:36:00.637 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:36:00.637 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:00.637 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4082463 00:36:00.637 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # 
process_name=reactor_2 00:36:00.637 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:36:00.637 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4082463' 00:36:00.637 killing process with pid 4082463 00:36:00.637 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 4082463 00:36:00.637 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 4082463 00:36:00.637 { 00:36:00.637 "results": [ 00:36:00.637 { 00:36:00.637 "job": "Nvme0n1", 00:36:00.637 "core_mask": "0x4", 00:36:00.637 "workload": "verify", 00:36:00.637 "status": "terminated", 00:36:00.637 "verify_range": { 00:36:00.637 "start": 0, 00:36:00.637 "length": 16384 00:36:00.637 }, 00:36:00.637 "queue_depth": 128, 00:36:00.637 "io_size": 4096, 00:36:00.637 "runtime": 27.001429, 00:36:00.637 "iops": 9739.33638845559, 00:36:00.637 "mibps": 38.044282767404646, 00:36:00.637 "io_failed": 0, 00:36:00.638 "io_timeout": 0, 00:36:00.638 "avg_latency_us": 13124.02938265596, 00:36:00.638 "min_latency_us": 334.50666666666666, 00:36:00.638 "max_latency_us": 3019898.88 00:36:00.638 } 00:36:00.638 ], 00:36:00.638 "core_count": 1 00:36:00.638 } 00:36:01.211 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4082463 00:36:01.211 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:01.211 [2024-11-07 13:40:39.801333] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:36:01.211 [2024-11-07 13:40:39.801443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082463 ] 00:36:01.211 [2024-11-07 13:40:39.919159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.211 [2024-11-07 13:40:39.992844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:01.211 Running I/O for 90 seconds... 
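
The JSON block above is bdevperf's final summary after killprocess stops it: status "terminated" rather than a timeout, because the status checks finish after roughly 27 s of the requested runtime. The throughput fields are internally consistent, which is a quick sanity check on any run: with a 4096-byte io_size, MiB/s is simply IOPS / 256.

# Cross-check the summary: mibps == iops * io_size / 2^20 == iops / 256 for 4 KiB I/O.
awk 'BEGIN { printf "%.6f\n", 9739.33638845559 / 256 }'   # -> 38.044283, the reported mibps
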
00:36:01.211 8502.00 IOPS, 33.21 MiB/s [2024-11-07T12:41:09.218Z] 8600.00 IOPS, 33.59 MiB/s [2024-11-07T12:41:09.218Z] 8607.33 IOPS, 33.62 MiB/s [2024-11-07T12:41:09.218Z] 8600.50 IOPS, 33.60 MiB/s [2024-11-07T12:41:09.218Z] 8844.60 IOPS, 34.55 MiB/s [2024-11-07T12:41:09.218Z] 9309.17 IOPS, 36.36 MiB/s [2024-11-07T12:41:09.218Z] 9655.29 IOPS, 37.72 MiB/s [2024-11-07T12:41:09.218Z] 9625.25 IOPS, 37.60 MiB/s [2024-11-07T12:41:09.218Z] 9511.78 IOPS, 37.16 MiB/s [2024-11-07T12:41:09.218Z] 9416.40 IOPS, 36.78 MiB/s [2024-11-07T12:41:09.218Z] 9353.45 IOPS, 36.54 MiB/s [2024-11-07T12:41:09.218Z]
00:36:01.211 [2024-11-07 13:40:53.515419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:01.211 [2024-11-07 13:40:53.515466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:36:01.211 [... ~125 similar command/completion pairs at 13:40:53 elided: WRITEs lba:105600-106416 and READs lba:105408-105584 on qid:1, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:36:01.212 9287.58 IOPS, 36.28 MiB/s [2024-11-07T12:41:09.219Z]
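These ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions are the expected output of the multipath status test rather than a failure: the test toggles the ANA state of one listener while verify I/O runs, and the host-side qpair prints each command that completes with the inaccessible status before it is retried on the other path. A minimal sketch of that toggle, assuming SPDK's nvmf_subsystem_listener_set_ana_state RPC; the listener address and port below are placeholders, since the exact invocation made by multipath_status.sh is not visible in this excerpt:

    # Sketch only: how bursts of 03/02 completions like the ones above are typically produced.
    # 10.0.0.2:4420 are placeholder values, not taken from this log.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Flip one path to ANA "inaccessible": in-flight I/O on that path completes
    # with ASYMMETRIC ACCESS INACCESSIBLE (03/02) and the host fails over.
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    sleep 10
    # Restore it to "optimized" so I/O can move back to this path.
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n optimized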
(03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:40:53.519076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.215 [2024-11-07 13:40:53.519084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:01.215 8575.38 IOPS, 33.50 MiB/s [2024-11-07T12:41:09.222Z] 7962.86 IOPS, 31.10 MiB/s [2024-11-07T12:41:09.222Z] 7432.00 IOPS, 29.03 MiB/s [2024-11-07T12:41:09.222Z] 7677.56 IOPS, 29.99 MiB/s [2024-11-07T12:41:09.222Z] 7911.88 IOPS, 30.91 MiB/s [2024-11-07T12:41:09.222Z] 8293.72 IOPS, 32.40 MiB/s [2024-11-07T12:41:09.222Z] 8670.58 IOPS, 33.87 MiB/s [2024-11-07T12:41:09.222Z] 8958.65 IOPS, 34.99 MiB/s [2024-11-07T12:41:09.222Z] 9085.48 IOPS, 35.49 MiB/s [2024-11-07T12:41:09.222Z] 9189.55 IOPS, 35.90 MiB/s [2024-11-07T12:41:09.222Z] 9411.74 IOPS, 36.76 MiB/s [2024-11-07T12:41:09.222Z] 9665.96 IOPS, 37.76 MiB/s [2024-11-07T12:41:09.222Z] [2024-11-07 13:41:06.309902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.309952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.309998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 
cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:115864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.215 [2024-11-07 13:41:06.310319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.215 [2024-11-07 13:41:06.310341] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.215 [2024-11-07 13:41:06.310363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.215 [2024-11-07 13:41:06.310385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.310429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.215 [2024-11-07 13:41:06.310450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.215 [2024-11-07 13:41:06.310472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.215 [2024-11-07 13:41:06.310493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.215 [2024-11-07 13:41:06.310513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.310528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.215 [2024-11-07 13:41:06.310535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.311093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 
[2024-11-07 13:41:06.311112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:01.215 [2024-11-07 13:41:06.311137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.215 [2024-11-07 13:41:06.311147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:01.216 [2024-11-07 13:41:06.311161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.216 [2024-11-07 13:41:06.311168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:01.216 [2024-11-07 13:41:06.311182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.216 [2024-11-07 13:41:06.311190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:01.216 [2024-11-07 13:41:06.311212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.216 [2024-11-07 13:41:06.311220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.216 [2024-11-07 13:41:06.311234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.216 [2024-11-07 13:41:06.311242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:01.216 [2024-11-07 13:41:06.311255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.216 [2024-11-07 13:41:06.311263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:01.216 [2024-11-07 13:41:06.311277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.216 [2024-11-07 13:41:06.311284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:01.216 [2024-11-07 13:41:06.311299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:01.216 [2024-11-07 13:41:06.311306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:01.216 [2024-11-07 13:41:06.311736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.216 [2024-11-07 13:41:06.311753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:01.216 [2024-11-07 13:41:06.311770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115376 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.216 [2024-11-07 13:41:06.311778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:01.216 [2024-11-07 13:41:06.311792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.216 [2024-11-07 13:41:06.311800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:01.216 [2024-11-07 13:41:06.311814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.216 [2024-11-07 13:41:06.311822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:01.216 9830.36 IOPS, 38.40 MiB/s [2024-11-07T12:41:09.223Z] 9783.00 IOPS, 38.21 MiB/s [2024-11-07T12:41:09.223Z] Received shutdown signal, test time was about 27.002068 seconds 00:36:01.216 00:36:01.216 Latency(us) 00:36:01.216 [2024-11-07T12:41:09.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.216 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:36:01.216 Verification LBA range: start 0x0 length 0x4000 00:36:01.216 Nvme0n1 : 27.00 9739.34 38.04 0.00 0.00 13124.03 334.51 3019898.88 00:36:01.216 [2024-11-07T12:41:09.223Z] =================================================================================================================== 00:36:01.216 [2024-11-07T12:41:09.223Z] Total : 9739.34 38.04 0.00 0.00 13124.03 334.51 3019898.88 00:36:01.216 13:41:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:01.216 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:36:01.216 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:01.216 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:36:01.216 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:01.216 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:36:01.216 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:01.216 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:36:01.216 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:01.216 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:01.216 rmmod nvme_tcp 00:36:01.216 rmmod nvme_fabrics 00:36:01.216 rmmod nvme_keyring 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:36:01.476 13:41:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 4082027 ']' 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 4082027 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 4082027 ']' 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 4082027 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4082027 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4082027' 00:36:01.476 killing process with pid 4082027 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 4082027 00:36:01.476 13:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 4082027 00:36:02.424 13:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:02.424 13:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:02.424 13:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:02.424 13:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:36:02.424 13:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:36:02.424 13:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:02.424 13:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:36:02.424 13:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:02.424 13:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:02.424 13:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.424 13:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:02.424 13:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.333 13:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:04.333 00:36:04.333 real 0m43.167s 00:36:04.333 user 1m48.649s 00:36:04.333 sys 0m12.412s 00:36:04.333 13:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:04.333 13:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:04.333 ************************************ 00:36:04.333 END TEST nvmf_host_multipath_status 00:36:04.333 ************************************ 
00:36:04.333 13:41:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:36:04.333 13:41:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:04.333 13:41:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:04.333 13:41:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.333 ************************************ 00:36:04.333 START TEST nvmf_discovery_remove_ifc 00:36:04.333 ************************************ 00:36:04.333 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:36:04.595 * Looking for test storage... 00:36:04.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:04.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.595 --rc genhtml_branch_coverage=1 00:36:04.595 --rc genhtml_function_coverage=1 00:36:04.595 --rc genhtml_legend=1 00:36:04.595 --rc geninfo_all_blocks=1 00:36:04.595 --rc geninfo_unexecuted_blocks=1 00:36:04.595 00:36:04.595 ' 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:04.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.595 --rc genhtml_branch_coverage=1 00:36:04.595 --rc genhtml_function_coverage=1 00:36:04.595 --rc genhtml_legend=1 00:36:04.595 --rc geninfo_all_blocks=1 00:36:04.595 --rc geninfo_unexecuted_blocks=1 00:36:04.595 00:36:04.595 ' 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:04.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.595 --rc genhtml_branch_coverage=1 00:36:04.595 --rc genhtml_function_coverage=1 00:36:04.595 --rc genhtml_legend=1 00:36:04.595 --rc geninfo_all_blocks=1 00:36:04.595 --rc geninfo_unexecuted_blocks=1 00:36:04.595 00:36:04.595 ' 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:04.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.595 --rc genhtml_branch_coverage=1 00:36:04.595 --rc genhtml_function_coverage=1 00:36:04.595 --rc genhtml_legend=1 00:36:04.595 --rc geninfo_all_blocks=1 00:36:04.595 --rc geninfo_unexecuted_blocks=1 00:36:04.595 00:36:04.595 ' 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:04.595 
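The scripts/common.sh xtrace above (cmp_versions 1.15 '<' 2) is the guard that checks whether the installed lcov predates 2.0 before the LCOV_OPTS exports that follow. The helper it steps through behaves roughly like this sketch: a field-wise numeric compare over dot/dash-separated version strings. The real code also normalizes each field through a decimal() helper, which is omitted here.

    lt() {  # lt 1.15 2  ->  returns 0 (true) because 1 < 2 in the first field
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1  # equal versions are not "less than"
    }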
13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.595 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:04.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:36:04.596 13:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:36:12.731 13:41:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:12.731 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:12.731 13:41:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:12.731 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:12.731 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:12.732 Found net devices under 0000:31:00.0: cvl_0_0 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:12.732 Found net devices under 0000:31:00.1: cvl_0_1 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:12.732 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:12.992 
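At this point nvmf_tcp_init has finished wiring the two detected e810 ports into a split target/initiator topology on one host. Condensed from the trace (interface names, addresses, and the iptables rule are exactly as echoed above; the real helper additionally probes PCI devices and tolerates reruns):

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # ipts: allow NVMe/TCP (port 4420) in, tagged so iptr can strip it at teardown
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'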
13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:12.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:12.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:36:12.992 00:36:12.992 --- 10.0.0.2 ping statistics --- 00:36:12.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.992 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:12.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:12.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:36:12.992 00:36:12.992 --- 10.0.0.1 ping statistics --- 00:36:12.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.992 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=4092822 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 4092822 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 4092822 ']' 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:12.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:12.992 13:41:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:12.992 [2024-11-07 13:41:20.928317] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:36:12.992 [2024-11-07 13:41:20.928445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:13.252 [2024-11-07 13:41:21.109626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.252 [2024-11-07 13:41:21.220726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:13.253 [2024-11-07 13:41:21.220775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:13.253 [2024-11-07 13:41:21.220787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.253 [2024-11-07 13:41:21.220799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.253 [2024-11-07 13:41:21.220811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:13.253 [2024-11-07 13:41:21.222073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:13.822 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:13.822 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:36:13.822 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:13.822 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:13.822 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:13.822 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:13.822 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:36:13.822 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.822 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:13.822 [2024-11-07 13:41:21.736383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:13.822 [2024-11-07 13:41:21.744603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:13.822 null0 00:36:13.822 [2024-11-07 13:41:21.776581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:13.822 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.823 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4093000 00:36:13.823 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4093000 /tmp/host.sock 00:36:13.823 13:41:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 4093000 ']' 00:36:13.823 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:36:13.823 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:13.823 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:13.823 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:13.823 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:13.823 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:13.823 13:41:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:36:14.082 [2024-11-07 13:41:21.881244] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:36:14.082 [2024-11-07 13:41:21.881357] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4093000 ] 00:36:14.082 [2024-11-07 13:41:22.018577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.343 [2024-11-07 13:41:22.115722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:36:14.913 13:41:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.913 13:41:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:16.296 [2024-11-07 13:41:23.895962] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:16.296 [2024-11-07 13:41:23.896002] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:16.296 [2024-11-07 13:41:23.896030] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:16.296 [2024-11-07 13:41:24.024461] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:36:16.296 [2024-11-07 13:41:24.084439] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:36:16.296 [2024-11-07 13:41:24.085734] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000417880:1 started. 00:36:16.296 [2024-11-07 13:41:24.087660] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:16.296 [2024-11-07 13:41:24.087718] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:16.296 [2024-11-07 13:41:24.087766] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:16.296 [2024-11-07 13:41:24.087789] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:16.296 [2024-11-07 13:41:24.087819] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:16.296 [2024-11-07 13:41:24.095971] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000417880 was disconnected and freed. delete nvme_qpair. 
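The rpc_cmd/jq/sort/xargs round-trips that repeat below are the script's wait_for_bdev polling: list the host app's bdevs once a second until the list matches the expectation (nvme0n1 now, the empty string after the interface is pulled). In outline, assuming rpc_cmd resolves to scripts/rpc.py against the host socket as it does in this run:

    get_bdev_list() {
        # names of all bdevs the host app currently sees, stable-sorted onto one line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {  # wait_for_bdev nvme0n1   (or: wait_for_bdev '' to wait for none)
        while [[ $(get_bdev_list) != "$1" ]]; do
            sleep 1
        done
    }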
00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:16.296 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.556 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:16.556 13:41:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:17.496 13:41:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:17.496 13:41:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:17.496 13:41:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:17.496 13:41:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:17.496 13:41:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.496 13:41:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:17.496 13:41:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:17.496 13:41:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.496 13:41:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:17.496 13:41:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:18.436 13:41:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:18.436 13:41:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:18.436 13:41:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:18.436 13:41:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.436 13:41:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:18.436 13:41:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:18.436 13:41:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:18.436 13:41:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.436 13:41:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:18.436 13:41:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:19.816 13:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:19.816 13:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:19.816 13:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:19.816 13:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:19.816 13:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.816 13:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:19.816 13:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:19.816 13:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.816 13:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:19.816 13:41:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:20.753 13:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:20.753 13:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:20.753 13:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:20.753 13:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.753 13:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:20.753 13:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:20.753 13:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:20.753 13:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.753 13:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:20.753 13:41:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:21.694 [2024-11-07 13:41:29.527920] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:36:21.694 [2024-11-07 13:41:29.527987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:21.694 [2024-11-07 13:41:29.528007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:21.694 [2024-11-07 13:41:29.528023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:21.694 [2024-11-07 13:41:29.528034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:21.694 [2024-11-07 13:41:29.528046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:21.694 [2024-11-07 13:41:29.528057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:21.694 [2024-11-07 13:41:29.528068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:21.694 [2024-11-07 13:41:29.528079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:21.694 [2024-11-07 13:41:29.528091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:21.694 [2024-11-07 13:41:29.528102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:21.694 [2024-11-07 13:41:29.528112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000417100 is same with the state(6) to be set 00:36:21.694 [2024-11-07 13:41:29.537936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000417100 (9): Bad file descriptor 00:36:21.694 13:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:21.694 13:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:21.694 13:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:21.694 13:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.694 13:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:21.694 13:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:21.694 13:41:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:21.694 [2024-11-07 13:41:29.547977] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:36:21.694 [2024-11-07 13:41:29.548004] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:36:21.694 [2024-11-07 13:41:29.548014] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:21.694 [2024-11-07 13:41:29.548026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:21.694 [2024-11-07 13:41:29.548068] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
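The one-second reconnect cadence recorded here, and the quick demotion to a failed controller just below, follow from the timeouts passed when discovery was started on the host socket (host/discovery_remove_ifc.sh@69 earlier in this run):

    # --reconnect-delay-sec 1       retry the TCP connection once per second
    # --fast-io-fail-timeout-sec 1  fail queued I/O after one second of disconnection
    # --ctrlr-loss-timeout-sec 2    give up and delete the controller after two seconds
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach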
00:36:22.633 [2024-11-07 13:41:30.587917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:36:22.633 [2024-11-07 13:41:30.587995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417100 with addr=10.0.0.2, port=4420 00:36:22.633 [2024-11-07 13:41:30.588018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000417100 is same with the state(6) to be set 00:36:22.633 [2024-11-07 13:41:30.588098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000417100 (9): Bad file descriptor 00:36:22.633 [2024-11-07 13:41:30.588642] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:36:22.633 [2024-11-07 13:41:30.588684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:22.633 [2024-11-07 13:41:30.588697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:22.633 [2024-11-07 13:41:30.588710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:22.633 [2024-11-07 13:41:30.588723] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:36:22.633 [2024-11-07 13:41:30.588733] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:36:22.633 [2024-11-07 13:41:30.588742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:22.633 [2024-11-07 13:41:30.588754] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:22.633 [2024-11-07 13:41:30.588766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:22.633 13:41:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.633 13:41:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:22.633 13:41:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:24.013 [2024-11-07 13:41:31.591154] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:36:24.013 [2024-11-07 13:41:31.591185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:24.013 [2024-11-07 13:41:31.591206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:24.013 [2024-11-07 13:41:31.591217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:24.013 [2024-11-07 13:41:31.591228] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:36:24.013 [2024-11-07 13:41:31.591239] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:36:24.013 [2024-11-07 13:41:31.591248] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:36:24.013 [2024-11-07 13:41:31.591255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:24.013 [2024-11-07 13:41:31.591294] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:36:24.013 [2024-11-07 13:41:31.591330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:24.013 [2024-11-07 13:41:31.591346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.013 [2024-11-07 13:41:31.591363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:24.013 [2024-11-07 13:41:31.591379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.013 [2024-11-07 13:41:31.591392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:24.013 [2024-11-07 13:41:31.591403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.013 [2024-11-07 13:41:31.591415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:24.013 [2024-11-07 13:41:31.591426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.013 [2024-11-07 13:41:31.591438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:24.013 [2024-11-07 13:41:31.591449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.013 [2024-11-07 13:41:31.591459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:36:24.014 [2024-11-07 13:41:31.591738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416980 (9): Bad file descriptor 00:36:24.014 [2024-11-07 13:41:31.592755] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:36:24.014 [2024-11-07 13:41:31.592776] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:24.014 13:41:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:24.952 13:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:24.952 13:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:24.952 13:41:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:24.952 13:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.952 13:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:24.952 13:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:24.952 13:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:24.952 13:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.952 13:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:24.952 13:41:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:25.893 [2024-11-07 13:41:33.652078] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:25.893 [2024-11-07 13:41:33.652104] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:25.893 [2024-11-07 13:41:33.652136] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:25.893 [2024-11-07 13:41:33.740419] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:36:25.893 [2024-11-07 13:41:33.839371] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:36:25.893 [2024-11-07 13:41:33.840730] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x615000418c80:1 started. 00:36:25.893 [2024-11-07 13:41:33.842630] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:25.893 [2024-11-07 13:41:33.842685] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:25.893 [2024-11-07 13:41:33.842731] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:25.893 [2024-11-07 13:41:33.842753] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:36:25.893 [2024-11-07 13:41:33.842767] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:25.893 [2024-11-07 13:41:33.849933] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x615000418c80 was disconnected and freed. delete nvme_qpair. 
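The one-second polling visible above and below comes from two small helpers in discovery_remove_ifc.sh. Reconstructed from the xtrace output (the helper names and the rpc_cmd/jq/sort/xargs pipeline match the trace; the exact function bodies are an approximation):

  # return the current bdev names from the host SPDK app as one sorted line
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # poll once per second until the bdev list matches the expected value
  wait_for_bdev() {
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }

Here rpc_cmd is the test suite's wrapper around scripts/rpc.py, and -s selects the RPC socket of the host-side application.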
00:36:25.893 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:25.893 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:25.893 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:25.893 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:25.893 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.893 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:25.893 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:26.154 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.154 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:36:26.154 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:36:26.154 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4093000 00:36:26.154 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 4093000 ']' 00:36:26.154 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 4093000 00:36:26.154 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:36:26.154 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:26.154 13:41:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4093000 00:36:26.155 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:26.155 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:26.155 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4093000' 00:36:26.155 killing process with pid 4093000 00:36:26.155 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 4093000 00:36:26.155 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 4093000 00:36:26.725 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:26.726 rmmod nvme_tcp 00:36:26.726 rmmod nvme_fabrics 00:36:26.726 rmmod nvme_keyring 00:36:26.726 13:41:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 4092822 ']'
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 4092822
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 4092822 ']'
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 4092822
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4092822
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4092822'
00:36:26.726 killing process with pid 4092822
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 4092822
00:36:26.726 13:41:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 4092822
00:36:27.297 13:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:27.297 13:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:27.297 13:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:27.297 13:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr
00:36:27.297 13:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:27.297 13:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save
00:36:27.297 13:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore
00:36:27.297 13:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:27.297 13:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:27.297 13:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:27.297 13:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:27.297 13:41:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:29.842
00:36:29.842 real 0m25.078s
00:36:29.842 user 0m28.757s
00:36:29.842 sys 0m7.765s
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:36:29.842 ************************************
00:36:29.842 END TEST nvmf_discovery_remove_ifc
00:36:29.842 ************************************
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:36:29.842 ************************************
00:36:29.842 START TEST nvmf_identify_kernel_target
00:36:29.842 ************************************
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:36:29.842 * Looking for test storage...
00:36:29.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:29.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.842 --rc genhtml_branch_coverage=1 00:36:29.842 --rc genhtml_function_coverage=1 00:36:29.842 --rc genhtml_legend=1 00:36:29.842 --rc geninfo_all_blocks=1 00:36:29.842 --rc geninfo_unexecuted_blocks=1 00:36:29.842 00:36:29.842 ' 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:29.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.842 --rc genhtml_branch_coverage=1 00:36:29.842 --rc genhtml_function_coverage=1 00:36:29.842 --rc genhtml_legend=1 00:36:29.842 --rc geninfo_all_blocks=1 00:36:29.842 --rc geninfo_unexecuted_blocks=1 00:36:29.842 00:36:29.842 ' 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:29.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.842 --rc genhtml_branch_coverage=1 00:36:29.842 --rc genhtml_function_coverage=1 00:36:29.842 --rc genhtml_legend=1 00:36:29.842 --rc geninfo_all_blocks=1 00:36:29.842 --rc geninfo_unexecuted_blocks=1 00:36:29.842 00:36:29.842 ' 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:29.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.842 --rc genhtml_branch_coverage=1 00:36:29.842 --rc genhtml_function_coverage=1 00:36:29.842 --rc genhtml_legend=1 00:36:29.842 --rc geninfo_all_blocks=1 00:36:29.842 --rc geninfo_unexecuted_blocks=1 00:36:29.842 00:36:29.842 ' 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:29.842 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:36:29.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:29.843 13:41:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:38.000 13:41:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:38.000 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:38.000 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:38.000 Found net devices under 0000:31:00.0: cvl_0_0 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:38.000 Found net devices under 0000:31:00.1: cvl_0_1 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:38.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:38.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms
00:36:38.000
00:36:38.000 --- 10.0.0.2 ping statistics ---
00:36:38.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:38.000 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:38.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:38.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms
00:36:38.000
00:36:38.000 --- 10.0.0.1 ping statistics ---
00:36:38.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:38.000 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:36:38.000 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:38.001 13:41:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:38.001 13:41:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:42.205 Waiting for block devices as requested 00:36:42.205 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:42.205 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:42.205 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:42.205 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:42.205 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:42.205 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:42.205 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:42.205 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:42.205 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:42.466 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:42.466 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:42.466 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:42.726 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:42.726 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:42.726 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:42.726 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:42.986 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
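Condensed for reference, the configure_kernel_target steps traced just below stand up a kernel NVMe-oF target through configfs and export /dev/nvme0n1 on 10.0.0.1:4420. The xtrace does not capture output redirections, so the attribute files on the right-hand side are the standard nvmet configfs names, filled in here as an assumption rather than read from the log:

  # sketch of the kernel nvmet target setup (attribute paths assumed, not traced)
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  # the traced 'echo SPDK-nqn...' presumably writes the model string; it shows
  # up later in the identify output as Model Number
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1 > "$subsys/attr_allow_any_host"              # accept any host NQN
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"              # bring the namespace online
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"        # expose the subsystem on the port

The nvme discover call that follows in the trace then reads back two discovery records (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn), confirming the export worked.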
00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:43.247 No valid GPT data, bailing 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:43.247 00:36:43.247 Discovery Log Number of Records 2, Generation counter 2 00:36:43.247 =====Discovery Log Entry 0====== 00:36:43.247 trtype: tcp 00:36:43.247 adrfam: ipv4 00:36:43.247 subtype: current discovery subsystem 00:36:43.247 treq: not specified, sq flow control disable supported 00:36:43.247 portid: 1 00:36:43.247 trsvcid: 4420 00:36:43.247 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:43.247 traddr: 10.0.0.1 00:36:43.247 eflags: none 00:36:43.247 sectype: none 00:36:43.247 =====Discovery Log Entry 1====== 00:36:43.247 trtype: tcp 00:36:43.247 adrfam: ipv4 00:36:43.247 subtype: nvme subsystem 00:36:43.247 treq: not specified, sq flow control disable 
supported 00:36:43.247 portid: 1 00:36:43.247 trsvcid: 4420 00:36:43.247 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:43.247 traddr: 10.0.0.1 00:36:43.247 eflags: none 00:36:43.247 sectype: none 00:36:43.247 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:43.247 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:43.509 ===================================================== 00:36:43.509 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:43.509 ===================================================== 00:36:43.509 Controller Capabilities/Features 00:36:43.509 ================================ 00:36:43.509 Vendor ID: 0000 00:36:43.509 Subsystem Vendor ID: 0000 00:36:43.509 Serial Number: b124cce72502f5448ce4 00:36:43.509 Model Number: Linux 00:36:43.509 Firmware Version: 6.8.9-20 00:36:43.509 Recommended Arb Burst: 0 00:36:43.509 IEEE OUI Identifier: 00 00 00 00:36:43.509 Multi-path I/O 00:36:43.509 May have multiple subsystem ports: No 00:36:43.509 May have multiple controllers: No 00:36:43.509 Associated with SR-IOV VF: No 00:36:43.509 Max Data Transfer Size: Unlimited 00:36:43.509 Max Number of Namespaces: 0 00:36:43.509 Max Number of I/O Queues: 1024 00:36:43.509 NVMe Specification Version (VS): 1.3 00:36:43.509 NVMe Specification Version (Identify): 1.3 00:36:43.509 Maximum Queue Entries: 1024 00:36:43.509 Contiguous Queues Required: No 00:36:43.509 Arbitration Mechanisms Supported 00:36:43.509 Weighted Round Robin: Not Supported 00:36:43.509 Vendor Specific: Not Supported 00:36:43.509 Reset Timeout: 7500 ms 00:36:43.509 Doorbell Stride: 4 bytes 00:36:43.509 NVM Subsystem Reset: Not Supported 00:36:43.509 Command Sets Supported 00:36:43.509 NVM Command Set: Supported 00:36:43.509 Boot Partition: Not Supported 00:36:43.509 Memory Page Size Minimum: 4096 bytes 00:36:43.509 Memory Page Size Maximum: 4096 bytes 00:36:43.509 Persistent Memory Region: Not Supported 00:36:43.509 Optional Asynchronous Events Supported 00:36:43.509 Namespace Attribute Notices: Not Supported 00:36:43.509 Firmware Activation Notices: Not Supported 00:36:43.509 ANA Change Notices: Not Supported 00:36:43.509 PLE Aggregate Log Change Notices: Not Supported 00:36:43.509 LBA Status Info Alert Notices: Not Supported 00:36:43.509 EGE Aggregate Log Change Notices: Not Supported 00:36:43.509 Normal NVM Subsystem Shutdown event: Not Supported 00:36:43.509 Zone Descriptor Change Notices: Not Supported 00:36:43.509 Discovery Log Change Notices: Supported 00:36:43.509 Controller Attributes 00:36:43.509 128-bit Host Identifier: Not Supported 00:36:43.509 Non-Operational Permissive Mode: Not Supported 00:36:43.509 NVM Sets: Not Supported 00:36:43.509 Read Recovery Levels: Not Supported 00:36:43.509 Endurance Groups: Not Supported 00:36:43.509 Predictable Latency Mode: Not Supported 00:36:43.509 Traffic Based Keep ALive: Not Supported 00:36:43.509 Namespace Granularity: Not Supported 00:36:43.509 SQ Associations: Not Supported 00:36:43.509 UUID List: Not Supported 00:36:43.509 Multi-Domain Subsystem: Not Supported 00:36:43.509 Fixed Capacity Management: Not Supported 00:36:43.509 Variable Capacity Management: Not Supported 00:36:43.509 Delete Endurance Group: Not Supported 00:36:43.509 Delete NVM Set: Not Supported 00:36:43.509 Extended LBA Formats Supported: Not Supported 00:36:43.509 Flexible Data Placement 
Supported: Not Supported 00:36:43.509 00:36:43.509 Controller Memory Buffer Support 00:36:43.509 ================================ 00:36:43.509 Supported: No 00:36:43.509 00:36:43.509 Persistent Memory Region Support 00:36:43.509 ================================ 00:36:43.509 Supported: No 00:36:43.509 00:36:43.509 Admin Command Set Attributes 00:36:43.509 ============================ 00:36:43.509 Security Send/Receive: Not Supported 00:36:43.509 Format NVM: Not Supported 00:36:43.509 Firmware Activate/Download: Not Supported 00:36:43.509 Namespace Management: Not Supported 00:36:43.509 Device Self-Test: Not Supported 00:36:43.509 Directives: Not Supported 00:36:43.509 NVMe-MI: Not Supported 00:36:43.509 Virtualization Management: Not Supported 00:36:43.509 Doorbell Buffer Config: Not Supported 00:36:43.509 Get LBA Status Capability: Not Supported 00:36:43.509 Command & Feature Lockdown Capability: Not Supported 00:36:43.509 Abort Command Limit: 1 00:36:43.509 Async Event Request Limit: 1 00:36:43.509 Number of Firmware Slots: N/A 00:36:43.509 Firmware Slot 1 Read-Only: N/A 00:36:43.509 Firmware Activation Without Reset: N/A 00:36:43.509 Multiple Update Detection Support: N/A 00:36:43.509 Firmware Update Granularity: No Information Provided 00:36:43.509 Per-Namespace SMART Log: No 00:36:43.509 Asymmetric Namespace Access Log Page: Not Supported 00:36:43.509 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:43.509 Command Effects Log Page: Not Supported 00:36:43.509 Get Log Page Extended Data: Supported 00:36:43.509 Telemetry Log Pages: Not Supported 00:36:43.509 Persistent Event Log Pages: Not Supported 00:36:43.509 Supported Log Pages Log Page: May Support 00:36:43.509 Commands Supported & Effects Log Page: Not Supported 00:36:43.509 Feature Identifiers & Effects Log Page:May Support 00:36:43.509 NVMe-MI Commands & Effects Log Page: May Support 00:36:43.509 Data Area 4 for Telemetry Log: Not Supported 00:36:43.509 Error Log Page Entries Supported: 1 00:36:43.509 Keep Alive: Not Supported 00:36:43.509 00:36:43.509 NVM Command Set Attributes 00:36:43.509 ========================== 00:36:43.509 Submission Queue Entry Size 00:36:43.509 Max: 1 00:36:43.509 Min: 1 00:36:43.509 Completion Queue Entry Size 00:36:43.509 Max: 1 00:36:43.509 Min: 1 00:36:43.509 Number of Namespaces: 0 00:36:43.509 Compare Command: Not Supported 00:36:43.509 Write Uncorrectable Command: Not Supported 00:36:43.509 Dataset Management Command: Not Supported 00:36:43.509 Write Zeroes Command: Not Supported 00:36:43.509 Set Features Save Field: Not Supported 00:36:43.509 Reservations: Not Supported 00:36:43.509 Timestamp: Not Supported 00:36:43.509 Copy: Not Supported 00:36:43.509 Volatile Write Cache: Not Present 00:36:43.509 Atomic Write Unit (Normal): 1 00:36:43.509 Atomic Write Unit (PFail): 1 00:36:43.509 Atomic Compare & Write Unit: 1 00:36:43.509 Fused Compare & Write: Not Supported 00:36:43.509 Scatter-Gather List 00:36:43.509 SGL Command Set: Supported 00:36:43.509 SGL Keyed: Not Supported 00:36:43.509 SGL Bit Bucket Descriptor: Not Supported 00:36:43.509 SGL Metadata Pointer: Not Supported 00:36:43.509 Oversized SGL: Not Supported 00:36:43.509 SGL Metadata Address: Not Supported 00:36:43.509 SGL Offset: Supported 00:36:43.509 Transport SGL Data Block: Not Supported 00:36:43.509 Replay Protected Memory Block: Not Supported 00:36:43.509 00:36:43.509 Firmware Slot Information 00:36:43.509 ========================= 00:36:43.509 Active slot: 0 00:36:43.510 00:36:43.510 00:36:43.510 Error Log 00:36:43.510 
========= 00:36:43.510 00:36:43.510 Active Namespaces 00:36:43.510 ================= 00:36:43.510 Discovery Log Page 00:36:43.510 ================== 00:36:43.510 Generation Counter: 2 00:36:43.510 Number of Records: 2 00:36:43.510 Record Format: 0 00:36:43.510 00:36:43.510 Discovery Log Entry 0 00:36:43.510 ---------------------- 00:36:43.510 Transport Type: 3 (TCP) 00:36:43.510 Address Family: 1 (IPv4) 00:36:43.510 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:43.510 Entry Flags: 00:36:43.510 Duplicate Returned Information: 0 00:36:43.510 Explicit Persistent Connection Support for Discovery: 0 00:36:43.510 Transport Requirements: 00:36:43.510 Secure Channel: Not Specified 00:36:43.510 Port ID: 1 (0x0001) 00:36:43.510 Controller ID: 65535 (0xffff) 00:36:43.510 Admin Max SQ Size: 32 00:36:43.510 Transport Service Identifier: 4420 00:36:43.510 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:43.510 Transport Address: 10.0.0.1 00:36:43.510 Discovery Log Entry 1 00:36:43.510 ---------------------- 00:36:43.510 Transport Type: 3 (TCP) 00:36:43.510 Address Family: 1 (IPv4) 00:36:43.510 Subsystem Type: 2 (NVM Subsystem) 00:36:43.510 Entry Flags: 00:36:43.510 Duplicate Returned Information: 0 00:36:43.510 Explicit Persistent Connection Support for Discovery: 0 00:36:43.510 Transport Requirements: 00:36:43.510 Secure Channel: Not Specified 00:36:43.510 Port ID: 1 (0x0001) 00:36:43.510 Controller ID: 65535 (0xffff) 00:36:43.510 Admin Max SQ Size: 32 00:36:43.510 Transport Service Identifier: 4420 00:36:43.510 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:43.510 Transport Address: 10.0.0.1 00:36:43.510 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:43.771 get_feature(0x01) failed 00:36:43.771 get_feature(0x02) failed 00:36:43.771 get_feature(0x04) failed 00:36:43.771 ===================================================== 00:36:43.771 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:43.771 ===================================================== 00:36:43.771 Controller Capabilities/Features 00:36:43.771 ================================ 00:36:43.771 Vendor ID: 0000 00:36:43.771 Subsystem Vendor ID: 0000 00:36:43.771 Serial Number: d788fee3262fb1b594d4 00:36:43.771 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:43.771 Firmware Version: 6.8.9-20 00:36:43.771 Recommended Arb Burst: 6 00:36:43.771 IEEE OUI Identifier: 00 00 00 00:36:43.771 Multi-path I/O 00:36:43.771 May have multiple subsystem ports: Yes 00:36:43.771 May have multiple controllers: Yes 00:36:43.771 Associated with SR-IOV VF: No 00:36:43.771 Max Data Transfer Size: Unlimited 00:36:43.771 Max Number of Namespaces: 1024 00:36:43.771 Max Number of I/O Queues: 128 00:36:43.771 NVMe Specification Version (VS): 1.3 00:36:43.771 NVMe Specification Version (Identify): 1.3 00:36:43.771 Maximum Queue Entries: 1024 00:36:43.771 Contiguous Queues Required: No 00:36:43.771 Arbitration Mechanisms Supported 00:36:43.771 Weighted Round Robin: Not Supported 00:36:43.771 Vendor Specific: Not Supported 00:36:43.771 Reset Timeout: 7500 ms 00:36:43.771 Doorbell Stride: 4 bytes 00:36:43.771 NVM Subsystem Reset: Not Supported 00:36:43.771 Command Sets Supported 00:36:43.771 NVM Command Set: Supported 00:36:43.771 Boot Partition: Not Supported 00:36:43.771 
Memory Page Size Minimum: 4096 bytes 00:36:43.771 Memory Page Size Maximum: 4096 bytes 00:36:43.771 Persistent Memory Region: Not Supported 00:36:43.771 Optional Asynchronous Events Supported 00:36:43.771 Namespace Attribute Notices: Supported 00:36:43.771 Firmware Activation Notices: Not Supported 00:36:43.771 ANA Change Notices: Supported 00:36:43.771 PLE Aggregate Log Change Notices: Not Supported 00:36:43.771 LBA Status Info Alert Notices: Not Supported 00:36:43.771 EGE Aggregate Log Change Notices: Not Supported 00:36:43.771 Normal NVM Subsystem Shutdown event: Not Supported 00:36:43.771 Zone Descriptor Change Notices: Not Supported 00:36:43.771 Discovery Log Change Notices: Not Supported 00:36:43.771 Controller Attributes 00:36:43.771 128-bit Host Identifier: Supported 00:36:43.771 Non-Operational Permissive Mode: Not Supported 00:36:43.771 NVM Sets: Not Supported 00:36:43.771 Read Recovery Levels: Not Supported 00:36:43.771 Endurance Groups: Not Supported 00:36:43.771 Predictable Latency Mode: Not Supported 00:36:43.771 Traffic Based Keep Alive: Supported 00:36:43.771 Namespace Granularity: Not Supported 00:36:43.771 SQ Associations: Not Supported 00:36:43.771 UUID List: Not Supported 00:36:43.771 Multi-Domain Subsystem: Not Supported 00:36:43.771 Fixed Capacity Management: Not Supported 00:36:43.771 Variable Capacity Management: Not Supported 00:36:43.771 Delete Endurance Group: Not Supported 00:36:43.771 Delete NVM Set: Not Supported 00:36:43.771 Extended LBA Formats Supported: Not Supported 00:36:43.771 Flexible Data Placement Supported: Not Supported 00:36:43.771 00:36:43.771 Controller Memory Buffer Support 00:36:43.771 ================================ 00:36:43.771 Supported: No 00:36:43.771 00:36:43.771 Persistent Memory Region Support 00:36:43.771 ================================ 00:36:43.771 Supported: No 00:36:43.771 00:36:43.771 Admin Command Set Attributes 00:36:43.771 ============================ 00:36:43.771 Security Send/Receive: Not Supported 00:36:43.771 Format NVM: Not Supported 00:36:43.771 Firmware Activate/Download: Not Supported 00:36:43.771 Namespace Management: Not Supported 00:36:43.771 Device Self-Test: Not Supported 00:36:43.771 Directives: Not Supported 00:36:43.771 NVMe-MI: Not Supported 00:36:43.771 Virtualization Management: Not Supported 00:36:43.771 Doorbell Buffer Config: Not Supported 00:36:43.771 Get LBA Status Capability: Not Supported 00:36:43.771 Command & Feature Lockdown Capability: Not Supported 00:36:43.771 Abort Command Limit: 4 00:36:43.771 Async Event Request Limit: 4 00:36:43.771 Number of Firmware Slots: N/A 00:36:43.771 Firmware Slot 1 Read-Only: N/A 00:36:43.771 Firmware Activation Without Reset: N/A 00:36:43.771 Multiple Update Detection Support: N/A 00:36:43.771 Firmware Update Granularity: No Information Provided 00:36:43.772 Per-Namespace SMART Log: Yes 00:36:43.772 Asymmetric Namespace Access Log Page: Supported 00:36:43.772 ANA Transition Time : 10 sec 00:36:43.772 00:36:43.772 Asymmetric Namespace Access Capabilities 00:36:43.772 ANA Optimized State : Supported 00:36:43.772 ANA Non-Optimized State : Supported 00:36:43.772 ANA Inaccessible State : Supported 00:36:43.772 ANA Persistent Loss State : Supported 00:36:43.772 ANA Change State : Supported 00:36:43.772 ANAGRPID is not changed : No 00:36:43.772 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:43.772 00:36:43.772 ANA Group Identifier Maximum : 128 00:36:43.772 Number of ANA Group Identifiers : 128 00:36:43.772 Max Number of Allowed Namespaces : 1024 00:36:43.772 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:43.772 Command Effects Log Page: Supported 00:36:43.772 Get Log Page Extended Data: Supported 00:36:43.772 Telemetry Log Pages: Not Supported 00:36:43.772 Persistent Event Log Pages: Not Supported 00:36:43.772 Supported Log Pages Log Page: May Support 00:36:43.772 Commands Supported & Effects Log Page: Not Supported 00:36:43.772 Feature Identifiers & Effects Log Page: May Support 00:36:43.772 NVMe-MI Commands & Effects Log Page: May Support 00:36:43.772 Data Area 4 for Telemetry Log: Not Supported 00:36:43.772 Error Log Page Entries Supported: 128 00:36:43.772 Keep Alive: Supported 00:36:43.772 Keep Alive Granularity: 1000 ms 00:36:43.772 00:36:43.772 NVM Command Set Attributes 00:36:43.772 ========================== 00:36:43.772 Submission Queue Entry Size 00:36:43.772 Max: 64 00:36:43.772 Min: 64 00:36:43.772 Completion Queue Entry Size 00:36:43.772 Max: 16 00:36:43.772 Min: 16 00:36:43.772 Number of Namespaces: 1024 00:36:43.772 Compare Command: Not Supported 00:36:43.772 Write Uncorrectable Command: Not Supported 00:36:43.772 Dataset Management Command: Supported 00:36:43.772 Write Zeroes Command: Supported 00:36:43.772 Set Features Save Field: Not Supported 00:36:43.772 Reservations: Not Supported 00:36:43.772 Timestamp: Not Supported 00:36:43.772 Copy: Not Supported 00:36:43.772 Volatile Write Cache: Present 00:36:43.772 Atomic Write Unit (Normal): 1 00:36:43.772 Atomic Write Unit (PFail): 1 00:36:43.772 Atomic Compare & Write Unit: 1 00:36:43.772 Fused Compare & Write: Not Supported 00:36:43.772 Scatter-Gather List 00:36:43.772 SGL Command Set: Supported 00:36:43.772 SGL Keyed: Not Supported 00:36:43.772 SGL Bit Bucket Descriptor: Not Supported 00:36:43.772 SGL Metadata Pointer: Not Supported 00:36:43.772 Oversized SGL: Not Supported 00:36:43.772 SGL Metadata Address: Not Supported 00:36:43.772 SGL Offset: Supported 00:36:43.772 Transport SGL Data Block: Not Supported 00:36:43.772 Replay Protected Memory Block: Not Supported 00:36:43.772 00:36:43.772 Firmware Slot Information 00:36:43.772 ========================= 00:36:43.772 Active slot: 0 00:36:43.772 00:36:43.772 Asymmetric Namespace Access 00:36:43.772 =========================== 00:36:43.772 Change Count : 0 00:36:43.772 Number of ANA Group Descriptors : 1 00:36:43.772 ANA Group Descriptor : 0 00:36:43.772 ANA Group ID : 1 00:36:43.772 Number of NSID Values : 1 00:36:43.772 Change Count : 0 00:36:43.772 ANA State : 1 00:36:43.772 Namespace Identifier : 1 00:36:43.772 00:36:43.772 Commands Supported and Effects 00:36:43.772 ============================== 00:36:43.772 Admin Commands 00:36:43.772 -------------- 00:36:43.772 Get Log Page (02h): Supported 00:36:43.772 Identify (06h): Supported 00:36:43.772 Abort (08h): Supported 00:36:43.772 Set Features (09h): Supported 00:36:43.772 Get Features (0Ah): Supported 00:36:43.772 Asynchronous Event Request (0Ch): Supported 00:36:43.772 Keep Alive (18h): Supported 00:36:43.772 I/O Commands 00:36:43.772 ------------ 00:36:43.772 Flush (00h): Supported 00:36:43.772 Write (01h): Supported LBA-Change 00:36:43.772 Read (02h): Supported 00:36:43.772 Write Zeroes (08h): Supported LBA-Change 00:36:43.772 Dataset Management (09h): Supported 00:36:43.772 00:36:43.772 Error Log 00:36:43.772 ========= 00:36:43.772 Entry: 0 00:36:43.772 Error Count: 0x3 00:36:43.772 Submission Queue Id: 0x0 00:36:43.772 Command Id: 0x5 00:36:43.772 Phase Bit: 0 00:36:43.772 Status Code: 0x2 00:36:43.772 Status Code Type: 0x0 00:36:43.772 Do Not Retry: 1 00:36:43.772 
Error Location: 0x28 00:36:43.772 LBA: 0x0 00:36:43.772 Namespace: 0x0 00:36:43.772 Vendor Log Page: 0x0 00:36:43.772 ----------- 00:36:43.772 Entry: 1 00:36:43.772 Error Count: 0x2 00:36:43.772 Submission Queue Id: 0x0 00:36:43.772 Command Id: 0x5 00:36:43.772 Phase Bit: 0 00:36:43.772 Status Code: 0x2 00:36:43.772 Status Code Type: 0x0 00:36:43.772 Do Not Retry: 1 00:36:43.772 Error Location: 0x28 00:36:43.772 LBA: 0x0 00:36:43.772 Namespace: 0x0 00:36:43.772 Vendor Log Page: 0x0 00:36:43.772 ----------- 00:36:43.772 Entry: 2 00:36:43.772 Error Count: 0x1 00:36:43.772 Submission Queue Id: 0x0 00:36:43.772 Command Id: 0x4 00:36:43.772 Phase Bit: 0 00:36:43.772 Status Code: 0x2 00:36:43.772 Status Code Type: 0x0 00:36:43.772 Do Not Retry: 1 00:36:43.772 Error Location: 0x28 00:36:43.772 LBA: 0x0 00:36:43.772 Namespace: 0x0 00:36:43.772 Vendor Log Page: 0x0 00:36:43.772 00:36:43.772 Number of Queues 00:36:43.772 ================ 00:36:43.772 Number of I/O Submission Queues: 128 00:36:43.772 Number of I/O Completion Queues: 128 00:36:43.772 00:36:43.772 ZNS Specific Controller Data 00:36:43.772 ============================ 00:36:43.772 Zone Append Size Limit: 0 00:36:43.772 00:36:43.772 00:36:43.772 Active Namespaces 00:36:43.772 ================= 00:36:43.772 get_feature(0x05) failed 00:36:43.772 Namespace ID:1 00:36:43.772 Command Set Identifier: NVM (00h) 00:36:43.772 Deallocate: Supported 00:36:43.772 Deallocated/Unwritten Error: Not Supported 00:36:43.772 Deallocated Read Value: Unknown 00:36:43.772 Deallocate in Write Zeroes: Not Supported 00:36:43.772 Deallocated Guard Field: 0xFFFF 00:36:43.772 Flush: Supported 00:36:43.772 Reservation: Not Supported 00:36:43.772 Namespace Sharing Capabilities: Multiple Controllers 00:36:43.772 Size (in LBAs): 3750748848 (1788GiB) 00:36:43.772 Capacity (in LBAs): 3750748848 (1788GiB) 00:36:43.772 Utilization (in LBAs): 3750748848 (1788GiB) 00:36:43.772 UUID: 53c8bcf7-f990-436a-8b9c-0d3b5cf59949 00:36:43.772 Thin Provisioning: Not Supported 00:36:43.772 Per-NS Atomic Units: Yes 00:36:43.772 Atomic Write Unit (Normal): 8 00:36:43.772 Atomic Write Unit (PFail): 8 00:36:43.772 Preferred Write Granularity: 8 00:36:43.772 Atomic Compare & Write Unit: 8 00:36:43.772 Atomic Boundary Size (Normal): 0 00:36:43.772 Atomic Boundary Size (PFail): 0 00:36:43.772 Atomic Boundary Offset: 0 00:36:43.772 NGUID/EUI64 Never Reused: No 00:36:43.772 ANA group ID: 1 00:36:43.772 Namespace Write Protected: No 00:36:43.772 Number of LBA Formats: 1 00:36:43.772 Current LBA Format: LBA Format #00 00:36:43.772 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:43.772 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:43.772 rmmod nvme_tcp 00:36:43.772 rmmod nvme_fabrics 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:43.772 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:43.773 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.773 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:43.773 13:41:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.683 13:41:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:45.683 13:41:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:45.683 13:41:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:45.683 13:41:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:36:45.683 13:41:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:45.683 13:41:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:45.683 13:41:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:45.683 13:41:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:45.683 13:41:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:45.683 13:41:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:45.944 13:41:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:49.245 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:49.245 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:49.245 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:36:49.245 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:49.245 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:49.245 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:49.245 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:49.245 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:49.245 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:49.245 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:49.506 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:49.506 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:49.506 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:49.506 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:49.506 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:49.506 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:49.506 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:49.766 00:36:49.766 real 0m20.282s 00:36:49.766 user 0m5.468s 00:36:49.766 sys 0m11.690s 00:36:49.766 13:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:49.766 13:41:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:49.766 ************************************ 00:36:49.766 END TEST nvmf_identify_kernel_target 00:36:49.766 ************************************ 00:36:49.766 13:41:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:49.766 13:41:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:49.766 13:41:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:49.766 13:41:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.766 ************************************ 00:36:49.766 START TEST nvmf_auth_host 00:36:49.766 ************************************ 00:36:49.766 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:50.078 * Looking for test storage... 
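The ioatdma -> vfio-pci and nvme -> vfio-pci lines above are setup.sh detaching each PCI function from its kernel driver and handing it to vfio-pci so SPDK can drive the hardware from userspace. A minimal sketch of that rebind mechanism through sysfs (an illustration of the general technique, not the actual setup.sh logic; run as root, BDF taken from one of the lines above):

# Rebind one PCI function from its kernel driver to vfio-pci (sketch).
modprobe vfio-pci                                          # make sure the userspace stub driver is loaded
bdf=0000:80:01.0                                           # one of the ioatdma functions listed above
echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind      # detach the current driver (ioatdma or nvme)
echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override  # pin the next probe to vfio-pci
echo "$bdf" > /sys/bus/pci/drivers_probe                   # re-probe; vfio-pci now claims the device

Writing an empty string back to driver_override and re-probing reverses the binding, which is how such devices are typically handed back to their kernel drivers after a run.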
00:36:50.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:50.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.078 --rc genhtml_branch_coverage=1 00:36:50.078 --rc genhtml_function_coverage=1 00:36:50.078 --rc genhtml_legend=1 00:36:50.078 --rc geninfo_all_blocks=1 00:36:50.078 --rc geninfo_unexecuted_blocks=1 00:36:50.078 00:36:50.078 ' 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:50.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.078 --rc genhtml_branch_coverage=1 00:36:50.078 --rc genhtml_function_coverage=1 00:36:50.078 --rc genhtml_legend=1 00:36:50.078 --rc geninfo_all_blocks=1 00:36:50.078 --rc geninfo_unexecuted_blocks=1 00:36:50.078 00:36:50.078 ' 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:50.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.078 --rc genhtml_branch_coverage=1 00:36:50.078 --rc genhtml_function_coverage=1 00:36:50.078 --rc genhtml_legend=1 00:36:50.078 --rc geninfo_all_blocks=1 00:36:50.078 --rc geninfo_unexecuted_blocks=1 00:36:50.078 00:36:50.078 ' 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:50.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.078 --rc genhtml_branch_coverage=1 00:36:50.078 --rc genhtml_function_coverage=1 00:36:50.078 --rc genhtml_legend=1 00:36:50.078 --rc geninfo_all_blocks=1 00:36:50.078 --rc geninfo_unexecuted_blocks=1 00:36:50.078 00:36:50.078 ' 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:50.078 13:41:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:50.078 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:50.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:50.079 13:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:36:58.293 13:42:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:58.293 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:58.293 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:58.293 
13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:58.293 Found net devices under 0000:31:00.0: cvl_0_0 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:58.293 Found net devices under 0000:31:00.1: cvl_0_1 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:58.293 13:42:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:58.293 13:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:58.293 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:58.293 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:58.293 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:58.293 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:58.293 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:58.293 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:58.293 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:58.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:58.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:36:58.294 00:36:58.294 --- 10.0.0.2 ping statistics --- 00:36:58.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:58.294 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:58.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:58.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:36:58.294 00:36:58.294 --- 10.0.0.1 ping statistics --- 00:36:58.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:58.294 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=4108445 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 4108445 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 4108445 ']' 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
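The two ping exchanges above verify the split-namespace topology the harness just built: the target-side port cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, while its peer port cvl_0_1 stayed in the root namespace as 10.0.0.1, so host and target traffic crosses the physical E810 link even though both ends run on one machine. Condensed from the commands visible in this run (interface, namespace, and address names as above):

# Target port in its own network namespace; initiator port in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the ns)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Because NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD wrapper above, nvmf_tgt itself runs under ip netns exec cvl_0_0_ns_spdk, so its listeners live behind 10.0.0.2.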
00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:58.294 13:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=18e97c02f4fbc247028e56cc578505bd 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.CKD 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 18e97c02f4fbc247028e56cc578505bd 0 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 18e97c02f4fbc247028e56cc578505bd 0 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=18e97c02f4fbc247028e56cc578505bd 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.CKD 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.CKD 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.CKD 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:58.294 13:42:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=83742d56a7e8c07fb92e45d849a062f7b2052590f1787c993d012eff7a28d9b0 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.VRD 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 83742d56a7e8c07fb92e45d849a062f7b2052590f1787c993d012eff7a28d9b0 3 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 83742d56a7e8c07fb92e45d849a062f7b2052590f1787c993d012eff7a28d9b0 3 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=83742d56a7e8c07fb92e45d849a062f7b2052590f1787c993d012eff7a28d9b0 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.VRD 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.VRD 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.VRD 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=62f7f74ad07943ebcb11e7438ff18d78b011370da90a8d0c 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.V3Z 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 62f7f74ad07943ebcb11e7438ff18d78b011370da90a8d0c 0 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 62f7f74ad07943ebcb11e7438ff18d78b011370da90a8d0c 0 
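Each gen_dhchap_key call above draws its raw key material with xxd -p -c0 -l <len/2> /dev/urandom and hands the hex string to format_key, which wraps it in the DHHC-1 representation used for NVMe DH-HMAC-CHAP secrets; the digests array above maps null/sha256/sha384/sha512 to digest ids 0-3. A self-contained sketch of that wrapping, assuming (as nvme-cli's gen-dhchap-key does) that a little-endian CRC-32 of the raw key is appended before base64 encoding:

# Produce a 32-byte DH-HMAC-CHAP secret in DHHC-1 form (sketch, not the harness code).
key_hex=$(xxd -p -c0 -l 32 /dev/urandom)   # 64 hex chars of random key material
python3 - "$key_hex" <<'EOF'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", binascii.crc32(key) & 0xffffffff)      # assumed LE CRC-32 trailer
print("DHHC-1:01:%s:" % base64.b64encode(key + crc).decode())  # 01 = SHA-256 digest id
EOF

Strings built this way are what the test stores in the /tmp/spdk.key-* files (note the chmod 0600 above) for later use when configuring target and host.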
00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=62f7f74ad07943ebcb11e7438ff18d78b011370da90a8d0c 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:58.294 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.V3Z 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.V3Z 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.V3Z 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=64c5ad88d288f301e58db7ff00ee45a0c0b8380027252de7 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9no 00:36:58.555 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 64c5ad88d288f301e58db7ff00ee45a0c0b8380027252de7 2 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 64c5ad88d288f301e58db7ff00ee45a0c0b8380027252de7 2 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=64c5ad88d288f301e58db7ff00ee45a0c0b8380027252de7 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9no 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9no 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.9no 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:58.556 13:42:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=336f853ffddce51053c9acf6b673f14f 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.pXn 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 336f853ffddce51053c9acf6b673f14f 1 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 336f853ffddce51053c9acf6b673f14f 1 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=336f853ffddce51053c9acf6b673f14f 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.pXn 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.pXn 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.pXn 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=863b9c41dcc4c23253cdc52af61d4826 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ILn 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 863b9c41dcc4c23253cdc52af61d4826 1 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 863b9c41dcc4c23253cdc52af61d4826 1 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=863b9c41dcc4c23253cdc52af61d4826 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ILn 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ILn 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ILn 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=33c3ea6d667a9c09fcd15c276477d1a5ca140010ee770919 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uyv 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 33c3ea6d667a9c09fcd15c276477d1a5ca140010ee770919 2 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 33c3ea6d667a9c09fcd15c276477d1a5ca140010ee770919 2 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=33c3ea6d667a9c09fcd15c276477d1a5ca140010ee770919 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:58.556 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uyv 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uyv 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.uyv 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:58.817 13:42:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=80bd52f35369e58b9ca3ca5c0053c5e0 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1Op 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 80bd52f35369e58b9ca3ca5c0053c5e0 0 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 80bd52f35369e58b9ca3ca5c0053c5e0 0 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=80bd52f35369e58b9ca3ca5c0053c5e0 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1Op 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1Op 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.1Op 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=850ff8ed9333ff9735fe538727e6633996765216d8970e299e07529181b88aa2 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Y9B 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 850ff8ed9333ff9735fe538727e6633996765216d8970e299e07529181b88aa2 3 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 850ff8ed9333ff9735fe538727e6633996765216d8970e299e07529181b88aa2 3 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=850ff8ed9333ff9735fe538727e6633996765216d8970e299e07529181b88aa2 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Y9B 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Y9B 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Y9B 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4108445 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 4108445 ']' 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:58.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:58.817 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CKD 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.VRD ]] 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VRD 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.V3Z 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.078 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.9no ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.9no 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.pXn 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ILn ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ILn 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.uyv 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.1Op ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.1Op 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Y9B 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:59.079 13:42:06 
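Once the target process is listening, every generated file is registered with the SPDK keyring over the RPC socket; the ckeyN registration is guarded on a companion controller key actually existing for that slot. In outline, assuming scripts/rpc.py stands in for the suite's rpc_cmd wrapper and keys[]/ckeys[] hold the file paths generated above:

# register keys[0..4] plus any controller keys with the keyring
for i in "${!keys[@]}"; do
	scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
	if [[ -n ${ckeys[i]} ]]; then
		scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
	fi
done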
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:59.079 13:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:59.079 13:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:59.079 13:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:03.285 Waiting for block devices as requested 00:37:03.285 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:03.285 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:03.285 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:03.285 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:03.285 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:03.285 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:03.285 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:03.285 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:03.546 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:03.546 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:03.807 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:03.807 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:03.807 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:03.807 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:04.067 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:04.067 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:04.067 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:05.008 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:05.008 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:05.008 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:05.008 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:05.009 No valid GPT data, bailing 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:05.009 13:42:12 
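After get_main_ns_ip resolves 10.0.0.1 (the initiator-side address for tcp transports), configure_kernel_target loads nvmet, lets setup.sh reset hand the NVMe device back to the kernel driver, and confirms via the GPT probe ("No valid GPT data, bailing") that the namespace is safe to claim. The mkdir calls above create the configfs skeleton that the echo writes just below populate; assembled in one place it looks roughly like this (assumption: the attribute names are inferred from the standard kernel nvmet configfs layout, since the trace shows only the values being echoed):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

The nvme discover call that follows in the trace then verifies the bring-up: two discovery log records, one for the discovery subsystem itself and one for nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420.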
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:05.009 13:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:37:05.270 00:37:05.270 Discovery Log Number of Records 2, Generation counter 2 00:37:05.270 =====Discovery Log Entry 0====== 00:37:05.270 trtype: tcp 00:37:05.270 adrfam: ipv4 00:37:05.270 subtype: current discovery subsystem 00:37:05.270 treq: not specified, sq flow control disable supported 00:37:05.270 portid: 1 00:37:05.270 trsvcid: 4420 00:37:05.270 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:05.270 traddr: 10.0.0.1 00:37:05.270 eflags: none 00:37:05.270 sectype: none 00:37:05.270 =====Discovery Log Entry 1====== 00:37:05.270 trtype: tcp 00:37:05.270 adrfam: ipv4 00:37:05.270 subtype: nvme subsystem 00:37:05.270 treq: not specified, sq flow control disable supported 00:37:05.270 portid: 1 00:37:05.270 trsvcid: 4420 00:37:05.270 subnqn: nqn.2024-02.io.spdk:cnode0 00:37:05.270 traddr: 10.0.0.1 00:37:05.270 eflags: none 00:37:05.270 sectype: none 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.270 nvme0n1 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.270 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]] 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
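From here the suite sweeps digest x dhgroup x keyid, and every combination is exercised the same way: the host side is restricted to the digest and DH group under test, the controller is attached with that key pair, the resulting bdev is checked, and the controller is detached again. The bare nvme0n1 lines in the trace are the attach call printing the bdev it created. A sketch of the connect_authenticate half, assuming scripts/rpc.py for rpc_cmd and the NQNs/addresses from the trace:

connect_authenticate() { # usage: connect_authenticate <digest> <dhgroup> <keyid>
	local digest=$1 dhgroup=$2 keyid=$3
	# an empty ckeys[keyid] makes the array expand to nothing (flag omitted)
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key$keyid" "${ckey[@]}"
	# the attach only succeeds if DH-HMAC-CHAP completed; confirm, then tear down
	[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
	scripts/rpc.py bdev_nvme_detach_controller nvme0
}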
00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.532 nvme0n1 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:05.532 13:42:13 
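The target half of each iteration, nvmet_auth_set_key, mirrors the same parameters into the kernel host entry that nvmet_auth_init created earlier (mkdir under /sys/kernel/config/nvmet/hosts, allow_any_host switched off, symlink into the subsystem's allowed_hosts), so both ends agree before the attach is attempted. A sketch, assuming the usual kernel nvmet dhchap_* attribute names since the trace shows only the echoed values:

nvmet_auth_set_key() { # usage: nvmet_auth_set_key <digest> <dhgroup> <keyid>
	local digest=$1 dhgroup=$2 keyid=$3
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
	local key ckey
	key=$(< "${keys[keyid]}") ckey=$(< "${ckeys[keyid]:-/dev/null}")
	echo "hmac($digest)" > "$host/dhchap_hash"
	echo "$dhgroup" > "$host/dhchap_dhgroup"
	echo "$key" > "$host/dhchap_key"
	# bidirectional auth only when a controller key exists for this keyid
	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}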
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.532 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.793 nvme0n1 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:37:05.793 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.794 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:05.794 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:05.794 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:05.794 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.794 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:05.794 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.794 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.054 nvme0n1 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.054 13:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.054 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:06.054 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:06.054 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.054 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.054 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.054 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:06.054 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:37:06.054 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:06.054 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:06.054 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:06.054 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:06.054 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.055 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.315 nvme0n1 00:37:06.315 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.315 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:06.315 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:06.315 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.315 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.315 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.315 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.316 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.576 nvme0n1 00:37:06.576 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.576 13:42:14 
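Note the keyid=4 case just above: ckeys[4] is empty, so the [[ -z '' ]] check skips the controller-key write on the target side, and the attach passes only --dhchap-key key4, exercising unidirectional authentication in contrast to the bidirectional iterations before it. The mechanism is plain parameter expansion; a hypothetical standalone check:

# ${var:+word} expands to nothing when var is empty or unset, so with
# ckeys[4]="" the optional flag simply disappears from the attach call:
ckeys[4]=
ckey=(${ckeys[4]:+--dhchap-ctrlr-key "ckey4"})
echo "${#ckey[@]}"   # -> 0: only --dhchap-key key4 is passed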
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:06.576 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:06.576 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]] 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.577 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.838 nvme0n1 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:06.838 
13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.838 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.110 nvme0n1 00:37:07.110 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.111 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.111 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.111 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.111 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.111 13:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.111 13:42:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.111 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.112 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.375 nvme0n1 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.375 13:42:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.375 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.635 nvme0n1 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:07.635 13:42:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.635 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.636 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:07.636 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.636 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:07.636 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:07.636 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:07.636 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:07.636 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.636 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.896 nvme0n1 00:37:07.896 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.896 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.896 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.896 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]] 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.897 13:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.468 nvme0n1 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:08.468 13:42:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:08.468 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:08.469 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:08.469 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:08.469 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.469 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.729 nvme0n1 00:37:08.729 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:37:08.729 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:08.729 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:08.729 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.729 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.729 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.730 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.991 nvme0n1 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:08.991 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:08.992 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.992 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.992 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:08.992 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:08.992 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:08.992 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:08.992 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:08.992 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:08.992 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.992 13:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.563 nvme0n1 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:09.563 13:42:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.563 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.824 nvme0n1 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+:
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=:
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+:
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]]
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=:
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:09.824 13:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:10.395 nvme0n1
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==:
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==:
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==:
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]]
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==:
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:10.395 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:10.396 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:10.396 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:10.396 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:10.396 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:10.396 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:37:10.396 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:10.396 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:10.966 nvme0n1
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC:
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv:
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC:
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]]
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv:
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:10.966 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:10.967 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:10.967 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:10.967 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:10.967 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:10.967 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:37:10.967 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:10.967 13:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.537 nvme0n1
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==:
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA:
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==:
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]]
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA:
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.538 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.120 nvme0n1
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=:
00:37:12.120 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=:
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.121 13:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.381 nvme0n1
00:37:12.381 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.381 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:12.381 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:12.381 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.381 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
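Key index 4 in the cycle above is the one entry carrying no controller key: the trace shows ckey= expanding to nothing at host/auth.sh@46, the [[ -z '' ]] branch at @51, and an attach that passes only --dhchap-key key4, i.e. unidirectional authentication. The ${ckeys[keyid]:+...} expansion traced at @58 is what makes the controller key optional; a self-contained illustration of the idiom (the key string here is a placeholder, not one of the test keys):

# ':+' expands to the alternate words only when the array slot is set and non-empty
declare -a ckeys=([0]="DHHC-1:00:placeholder:" [4]="")
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
done
# keyid=0 -> 2 extra args: --dhchap-ctrlr-key ckey0   (bidirectional)
# keyid=4 -> 0 extra args:                            (unidirectional)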
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+:
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=:
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+:
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]]
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=:
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:12.642 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:12.643 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:37:12.643 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.643 13:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:13.213 nvme0n1
00:37:13.213 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:13.213 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:13.213 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:13.213 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:13.213 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==:
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==:
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==:
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]]
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==:
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:13.473 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:13.474 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:37:13.474 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:13.474 13:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:14.044 nvme0n1
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC:
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv:
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:14.305 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC:
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]]
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv:
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
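The get_main_ns_ip frames (nvmf/common.sh@769-@783) that repeat before every attach resolve which address the initiator should dial: an associative array maps the transport to the name of an environment variable, and the value (10.0.0.1 here) is read back through indirect expansion. A sketch consistent with those frames; the TEST_TRANSPORT variable name is an assumption from context, everything else mirrors the trace:

# returns the main namespace IP for the transport under test
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                     # trace: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # trace: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}                     # trace: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                              # indirect expansion: 10.0.0.1 here
    echo "${!ip}"                                            # trace: echo 10.0.0.1
}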
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:14.306 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:14.876 nvme0n1
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==:
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA:
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==:
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]]
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA:
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:15.137 13:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.077 nvme0n1
00:37:16.077 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.077 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:16.077 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:16.077 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.077 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=:
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=:
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.078 13:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.649 nvme0n1
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
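Each nvmet_auth_set_key trace (host/auth.sh@42-@51) seen above ends in bare echo frames: the digest wrapped as 'hmac(...)', the DH group, the DHHC-1 key, and, when present, the controller key. The redirection targets are not captured by xtrace, but on a kernel nvmet target these values would plausibly land in the host's DH-CHAP attributes under configfs; the paths below are an assumption based on that layout, not something visible in this log:

# presumed destination of the echo frames (kernel nvmet configfs layout; assumption)
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # @48: digest
echo ffdhe2048      > "$host_dir/dhchap_dhgroup"   # @49: DH group
echo "$key"         > "$host_dir/dhchap_key"       # @50: host key
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # @51: optional controller key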
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+:
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=:
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+:
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]]
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=:
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.649 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.910 nvme0n1
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==:
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==:
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==:
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]]
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==:
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:16.910 13:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.171 nvme0n1
00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.171 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.432 nvme0n1 00:37:17.432 13:42:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.432 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.433 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.694 nvme0n1 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:17.694 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.695 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.955 nvme0n1 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]] 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:17.955 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:17.956 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:17.956 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:17.956 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.956 13:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.216 nvme0n1 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.216 
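The pass above completes the first sha384/ffdhe3072 iteration (keyid=0). The host/auth.sh@101-104 markers that recur in this trace imply the nested loop sketched below; this is a minimal reconstruction, assuming keys[] and ckeys[] are the key arrays auth.sh populates earlier in the suite (not shown in this excerpt), with the enclosing digest loop (sha384 here) sitting outside it:

  digest=sha384
  for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ...
      for keyid in "${!keys[@]}"; do         # keyids 0..4
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # auth.sh@103: stage key on target
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # auth.sh@104: attach + verify
      done
  done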
13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:18.216 13:42:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.216 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.478 nvme0n1 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.478 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.739 nvme0n1 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:18.739 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:18.740 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:18.740 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:18.740 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.740 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.000 nvme0n1 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:19.000 
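The echo calls at host/auth.sh@48-51 stage the digest, dhgroup, key, and controller key for the target side. xtrace does not print redirections, so the configfs destinations in this sketch are an assumption based on the kernel nvmet DH-CHAP host attributes; only the echoed values come from the trace:

  # Assumed destination paths; the log shows only the values being echoed.
  hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$hostdir/dhchap_hash"
  echo "$dhgroup"     > "$hostdir/dhchap_dhgroup"
  echo "$key"         > "$hostdir/dhchap_key"
  [[ -n $ckey ]] && echo "$ckey" > "$hostdir/dhchap_ctrl_key"   # skipped when no ctrlr key (keyid 4)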
13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:19.000 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.001 13:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.262 nvme0n1 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.262 
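Every attach in this log is followed by the same check-and-teardown step (host/auth.sh@64-65): the controller list must contain the requested bdev name, proving the DH-CHAP handshake succeeded, and the controller is then detached before the next key/dhgroup combination. The equivalent commands, taken directly from the trace:

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                      # authentication failed if no controller came up
  rpc_cmd bdev_nvme_detach_controller nvme0   # tear down for the next iteration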
13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]] 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:19.262 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:19.263 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:19.263 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:19.263 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:19.263 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:19.263 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:19.263 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:19.263 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:19.263 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.263 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.523 nvme0n1 00:37:19.523 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.523 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:19.523 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:19.523 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.523 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.523 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:19.784 13:42:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.784 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.046 nvme0n1 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.046 13:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.307 nvme0n1 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.307 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.568 nvme0n1 00:37:20.568 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.568 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.568 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.568 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.568 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.568 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:20.829 13:42:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.829 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.090 nvme0n1 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]] 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.090 13:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.662 nvme0n1 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.662 13:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.232 nvme0n1 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.232 13:42:30 
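
Between iterations the script proves that the attach actually produced an authenticated controller, then tears it down so each key pair is tested in isolation. The check-and-detach step as it appears in the trace (the jq filter is verbatim; rpc_cmd is again assumed to wrap scripts/rpc.py):

    # Verify exactly the expected controller exists, then remove it.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]            # the trace's [[ nvme0 == \n\v\m\e\0 ]] comparison
    rpc_cmd bdev_nvme_detach_controller nvme0
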
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.232 13:42:30 
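
The target side of each iteration is nvmet_auth_set_key (auth.sh@42-51): it echoes the digest as 'hmac(sha384)', the DH group, and the DHHC-1 secrets, with the redirect targets elided by xtrace. Those writes most likely land in the kernel nvmet configfs host entry; the paths below are an assumption based on the standard nvmet in-band-auth attributes, not something the trace shows:

    # Assumed destinations of the @48-@51 echoes (kernel nvmet configfs).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"        # @48
    echo ffdhe6144      > "$host/dhchap_dhgroup"     # @49
    echo "$key"         > "$host/dhchap_key"         # @50
    echo "$ckey"        > "$host/dhchap_ctrl_key"    # @51, only when a ctrlr key exists
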
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.232 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.233 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:22.233 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.233 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:22.233 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:22.233 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:22.233 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:22.233 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.233 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.802 nvme0n1 00:37:22.802 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.802 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.802 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.802 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.802 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:22.803 13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.803 
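
The secrets themselves use the NVMe DH-HMAC-CHAP shared-secret representation: a 'DHHC-1:' prefix, a two-digit indicator of how the secret was transformed (00 = used as-is; 01/02/03 = pre-hashed with SHA-256/384/512), a base64 payload carrying the secret plus a 4-byte CRC-32, and a closing colon. That makes payload lengths easy to sanity-check; for the key below, taken verbatim from this run, the decode should come to 52 bytes (48-byte secret + CRC):

    # Count the decoded payload bytes of one of this run's keys.
    key='DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==:'
    printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c    # expect 52
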
13:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.373 nvme0n1 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:23.373 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.374 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.945 nvme0n1 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.945 13:42:31 
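
The @101-@104 markers lay out the harness's control flow: an outer loop over DH groups (ffdhe4096 -> ffdhe6144 -> ffdhe8192 in this excerpt), an inner loop over key indices 0-4, and per iteration a target-side nvmet_auth_set_key followed by a host-side connect_authenticate. Reconstructed shape only; everything in this stretch of the log runs with sha384, and the digest handling outside this excerpt is not shown:

    # Shape of host/auth.sh@101-104 as implied by the xtrace markers.
    for dhgroup in "${dhgroups[@]}"; do                  # ffdhe4096, ffdhe6144, ffdhe8192
        for keyid in "${!keys[@]}"; do                   # 0 1 2 3 4
            nvmet_auth_set_key   sha384 "$dhgroup" "$keyid"   # program the target
            connect_authenticate sha384 "$dhgroup" "$keyid"   # attach, verify, detach
        done
    done
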
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]] 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.945 13:42:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.517 nvme0n1 00:37:24.517 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.517 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:24.517 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:24.517 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.517 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.778 13:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.349 nvme0n1 00:37:25.349 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.349 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.349 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:25.349 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.349 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.609 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.610 
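
connect_authenticate resolves the address to dial through get_main_ns_ip (nvmf/common.sh@769-783 in the markers): a transport-keyed map of environment-variable names is built, the entry for the active transport is dereferenced indirectly, and the result (10.0.0.1 here) is echoed. A sketch of the logic visible in the trace; the variable holding "tcp" is elided by xtrace, so TEST_TRANSPORT below is an assumed name:

    # Resolve the initiator IP for the active transport via indirect expansion.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1     # ${!ip} dereferences e.g. $NVMF_INITIATOR_IP
        echo "${!ip}"                   # 10.0.0.1 in this run
    }
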
13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.610 13:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.551 nvme0n1 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:26.551 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:26.552 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:26.552 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.552 13:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.123 nvme0n1 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.123 13:42:35 
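Each keyid iteration ends with the same sanity check seen at host/auth.sh@64-65: the authenticated attach must have produced a controller named after the -b argument, and it is detached again so the next combination starts from a clean slate. A minimal standalone equivalent, assuming SPDK's scripts/rpc.py is reachable as rpc.py (the rpc_cmd wrapper in the trace drives the same RPCs):

    # Did the DH-HMAC-CHAP connect actually create the controller?
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || { echo "auth connect failed: got '$name'" >&2; exit 1; }
    rpc.py bdev_nvme_detach_controller nvme0   # clean slate for the next combination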
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:27.123 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:27.124 13:42:35 
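nvmet_auth_set_key (the @42-51 frames) is the target-side half of each iteration: the digest is rewritten as a kernel crypto string ('hmac(sha512)') and, together with the DH group and the DHHC-1 secrets, echoed into the kernel nvmet configfs entry for the host. A hedged sketch of where those echos most plausibly land; the configfs path and attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the stock Linux nvmet ones, assumed rather than read out of this script:

    # Target-side DH-HMAC-CHAP configuration, approximately what @48-51 drives.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
    echo 'hmac(sha512)' > "$host_dir/dhchap_hash"     # digest for the challenge
    echo ffdhe2048      > "$host_dir/dhchap_dhgroup"  # FFDHE group to negotiate
    echo "$key"         > "$host_dir/dhchap_key"      # host secret (DHHC-1:...)
    # ckey is empty for keyid=4 above, so the controller-key write is conditional:
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"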
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.124 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.065 nvme0n1 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]] 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.065 13:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:28.325 nvme0n1 00:37:28.325 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.325 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.325 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.325 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.325 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.325 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.325 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:28.325 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:28.325 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.325 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.325 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.326 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.586 nvme0n1 00:37:28.586 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.586 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.586 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.586 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.586 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.586 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.586 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:28.586 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:28.586 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.586 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:37:28.587 
13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.587 nvme0n1 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.587 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:28.848 
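connect_authenticate (@55-61) is the initiator half: bdev_nvme is pinned to exactly one digest and one DH group, then the controller is attached with the key pair for this keyid. Stripped of the rpc_cmd wrapper, the keyid=3 step about to run is roughly the following; the secrets are referenced by the names key3/ckey3 under which the earlier part of the test registered them:

    # Initiator-side connect for digest=sha512, dhgroup=ffdhe2048, keyid=3
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3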
13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:28.848 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:28.849 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:28.849 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:28.849 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:28.849 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.849 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.849 nvme0n1 00:37:28.849 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.849 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.849 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.849 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.849 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.849 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.108 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.108 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.108 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.108 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:29.108 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.108 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.108 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:37:29.108 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.108 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:29.108 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:29.108 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.109 13:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.109 nvme0n1 00:37:29.109 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.109 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.109 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.109 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.109 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.109 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.109 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.109 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.109 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.109 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]] 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.370 nvme0n1 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.370 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.630 
13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:29.631 13:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.631 nvme0n1 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.631 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:29.892 13:42:37 
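Zooming out, the @100-@104 frames repeating through this stretch are three nested loops: every digest x dhgroup x keyid combination re-keys the target, reconnects with authentication, verifies the controller and detaches. A skeleton reconstructed from the traced line numbers (array contents inferred from the combinations this log actually exercises; helper bodies as sketched earlier):

    for digest in "${digests[@]}"; do          # sha384 and sha512 visible in this stretch
        for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048..ffdhe8192 visible
            for keyid in "${!keys[@]}"; do     # keyids 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (@103)
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach (@104)
            done
        done
    done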
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.892 nvme0n1 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.892 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.153 13:42:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.153 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:30.154 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:30.154 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:30.154 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:30.154 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.154 13:42:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.413 nvme0n1 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:30.413 
13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:30.413 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.414 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
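
Every iteration in this stretch of the trace runs the same cycle from host/auth.sh: nvmet_auth_set_key installs the expected DH-HMAC-CHAP secret on the target for one digest/dhgroup/keyid combination, connect_authenticate then restricts the initiator to that same combination, attaches a controller with the matching key, confirms the controller came up, and detaches it before the next keyid. Below is a minimal sketch of that cycle as reconstructed from the xtrace output of the sha512 pass; the nvmet configfs paths and the cfs_host variable are assumptions (set -x does not print redirections, so the targets of the echoes at host/auth.sh@48-51 are not visible in the log), while the loop structure, RPCs, and flags appear verbatim in the trace.

for dhgroup in "${dhgroups[@]}"; do                        # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                         # host/auth.sh@102
        # Target side (nvmet_auth_set_key, host/auth.sh@42-51): the bare
        # echoes in the trace feed nvmet configfs; the paths are assumed.
        echo 'hmac(sha512)'      > "$cfs_host/dhchap_hash"       # assumed
        echo "$dhgroup"          > "$cfs_host/dhchap_dhgroup"    # assumed
        echo "${keys[keyid]}"    > "$cfs_host/dhchap_key"        # assumed
        [[ -n ${ckeys[keyid]} ]] &&
            echo "${ckeys[keyid]}" > "$cfs_host/dhchap_ctrl_key" # assumed

        # Initiator side (connect_authenticate, host/auth.sh@55-61): permit
        # exactly one digest/dhgroup pair, then attach with the named keys.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 \
            --dhchap-dhgroups "$dhgroup"

        # ckey expands to zero words when ckeys[keyid] is empty (keyid 4
        # above), so --dhchap-ctrlr-key is omitted entirely in that case.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Authentication succeeded if the controller shows up by name;
        # tear it down before the next iteration.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

The 10.0.0.1 address in each attach comes from get_main_ns_ip (nvmf/common.sh@769-783), which indexes an ip_candidates table by transport, rdma mapping to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, and echoes the resolved value; since this run uses tcp, every connection targets the initiator-side address 10.0.0.1.
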
00:37:30.673 nvme0n1 00:37:30.673 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.673 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:30.673 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:30.673 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.673 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.673 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.673 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.673 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.673 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.673 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.673 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]] 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:30.674 13:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.674 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.934 nvme0n1 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.934 13:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.934 13:42:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.934 13:42:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.194 nvme0n1 00:37:31.194 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.194 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.194 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.194 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.194 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.194 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.454 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.454 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.454 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.454 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.454 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.454 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.455 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.716 nvme0n1 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.716 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.977 nvme0n1 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.977 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.237 13:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.498 nvme0n1 00:37:32.498 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.498 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.498 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]] 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.499 13:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.499 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.069 nvme0n1 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:33.069 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:33.070 13:42:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.070 13:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.330 nvme0n1 00:37:33.330 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.330 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.330 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.330 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.330 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.330 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.591 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.164 nvme0n1 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.164 13:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.436 nvme0n1 00:37:34.436 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.436 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.436 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.436 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.436 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.436 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:34.761 13:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.761 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.076 nvme0n1 00:37:35.076 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.076 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.076 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.076 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.076 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.076 13:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlOTdjMDJmNGZiYzI0NzAyOGU1NmNjNTc4NTA1YmS740y+: 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: ]] 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODM3NDJkNTZhN2U4YzA3ZmI5MmU0NWQ4NDlhMDYyZjdiMjA1MjU5MGYxNzg3Yzk5M2QwMTJlZmY3YTI4ZDliMFqUgO0=: 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:35.076 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:35.337 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:35.337 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.337 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.909 nvme0n1 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.909 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:35.910 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.910 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:35.910 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:35.910 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:35.910 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:35.910 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.910 13:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.852 nvme0n1 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.852 13:42:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.852 13:42:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:36.852 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:36.853 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:36.853 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:36.853 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:36.853 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:36.853 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:36.853 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.853 13:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.795 nvme0n1 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNjM2VhNmQ2NjdhOWMwOWZjZDE1YzI3NjQ3N2QxYTVjYTE0MDAxMGVlNzcwOTE5IGhKiw==: 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: ]] 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBiZDUyZjM1MzY5ZTU4YjljYTNjYTVjMDA1M2M1ZTD9TydA: 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:37.795 13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.795 
13:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.366 nvme0n1 00:37:38.366 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.366 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:38.366 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:38.366 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.366 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.366 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.366 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:38.366 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:38.366 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.366 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODUwZmY4ZWQ5MzMzZmY5NzM1ZmU1Mzg3MjdlNjYzMzk5Njc2NTIxNmQ4OTcwZTI5OWUwNzUyOTE4MWI4OGFhMjOeTLc=: 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.628 13:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.200 nvme0n1 00:37:39.200 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.200 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.200 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:39.200 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.200 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.200 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.200 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:39.200 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:39.200 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.200 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.462 request: 00:37:39.462 { 00:37:39.462 "name": "nvme0", 00:37:39.462 "trtype": "tcp", 00:37:39.462 "traddr": "10.0.0.1", 00:37:39.462 "adrfam": "ipv4", 00:37:39.462 "trsvcid": "4420", 00:37:39.462 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:39.462 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:39.462 "prchk_reftag": false, 00:37:39.462 "prchk_guard": false, 00:37:39.462 "hdgst": false, 00:37:39.462 "ddgst": false, 00:37:39.462 "allow_unrecognized_csi": false, 00:37:39.462 "method": "bdev_nvme_attach_controller", 00:37:39.462 "req_id": 1 00:37:39.462 } 00:37:39.462 Got JSON-RPC error response 00:37:39.462 response: 00:37:39.462 { 00:37:39.462 "code": -5, 00:37:39.462 "message": "Input/output error" 00:37:39.462 } 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:39.462 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
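The request/response pair above is the suite's first negative check: with the target demanding DH-HMAC-CHAP, a plain attach with no key must fail, and the failure surfaces as JSON-RPC error -5 (Input/output error). The NOT wrapper in the trace inverts the exit status so the test passes only when the connect is rejected. A minimal standalone equivalent:

    # Expecting failure: no --dhchap-key is supplied to an auth-required target.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
           -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
           -n nqn.2024-02.io.spdk:cnode0; then
        echo "unexpected success: unauthenticated connect was accepted" >&2
        exit 1
    fi

The two checks that follow in the trace repeat the pattern with keys the target cannot verify (--dhchap-key key2 alone, then key1 paired with ckey2) and expect the same -5 response.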
00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.463 request: 00:37:39.463 { 00:37:39.463 "name": "nvme0", 00:37:39.463 "trtype": "tcp", 00:37:39.463 "traddr": "10.0.0.1", 00:37:39.463 "adrfam": "ipv4", 00:37:39.463 "trsvcid": "4420", 00:37:39.463 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:39.463 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:39.463 "prchk_reftag": false, 00:37:39.463 "prchk_guard": false, 00:37:39.463 "hdgst": false, 00:37:39.463 "ddgst": false, 00:37:39.463 "dhchap_key": "key2", 00:37:39.463 "allow_unrecognized_csi": false, 00:37:39.463 "method": "bdev_nvme_attach_controller", 00:37:39.463 "req_id": 1 00:37:39.463 } 00:37:39.463 Got JSON-RPC error response 00:37:39.463 response: 00:37:39.463 { 00:37:39.463 "code": -5, 00:37:39.463 "message": "Input/output error" 00:37:39.463 } 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.463 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.724 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.725 request: 00:37:39.725 { 00:37:39.725 "name": "nvme0", 00:37:39.725 "trtype": "tcp", 00:37:39.725 "traddr": "10.0.0.1", 00:37:39.725 "adrfam": "ipv4", 00:37:39.725 "trsvcid": "4420", 00:37:39.725 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:39.725 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:39.725 "prchk_reftag": false, 00:37:39.725 "prchk_guard": false, 00:37:39.725 "hdgst": false, 00:37:39.725 "ddgst": false, 00:37:39.725 "dhchap_key": "key1", 00:37:39.725 "dhchap_ctrlr_key": "ckey2", 00:37:39.725 "allow_unrecognized_csi": false, 00:37:39.725 "method": "bdev_nvme_attach_controller", 00:37:39.725 "req_id": 1 00:37:39.725 } 00:37:39.725 Got JSON-RPC error response 00:37:39.725 response: 00:37:39.725 { 00:37:39.725 "code": -5, 00:37:39.725 "message": "Input/output 
error" 00:37:39.725 } 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.725 nvme0n1 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.725 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.986 request: 00:37:39.986 { 00:37:39.986 "name": "nvme0", 00:37:39.986 "dhchap_key": "key1", 00:37:39.986 "dhchap_ctrlr_key": "ckey2", 00:37:39.986 "method": "bdev_nvme_set_keys", 00:37:39.986 "req_id": 1 00:37:39.986 } 00:37:39.986 Got JSON-RPC error response 00:37:39.986 response: 00:37:39.986 { 00:37:39.986 "code": -13, 00:37:39.986 "message": "Permission denied" 00:37:39.986 } 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
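After reconnecting with key1/ckey1 and a one-second ctrlr-loss timeout, the suite exercises live re-keying: bdev_nvme_set_keys swaps the DH-HMAC-CHAP secrets on an already-connected controller. The trace above first re-keys to key2/ckey2 (after re-provisioning the target with key id 2), then checks that a pair the target cannot match (key1 with ckey2) is refused with -13 (Permission denied). A sketch of that pair of calls, under the assumption the keyring names key1/key2/ckey2 are already registered as in this run:

    # Re-key the live controller, then expect a mismatched pair to be refused.
    scripts/rpc.py bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    if scripts/rpc.py bdev_nvme_set_keys nvme0 \
           --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "unexpected success: mismatched re-key was accepted" >&2
        exit 1
    fi

The sleep/jq-length polling loops that follow in the trace wait for the controller count to settle, since a rejected re-key can drop and re-establish the connection.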
!es == 0 )) 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:39.986 13:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:41.371 13:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:41.371 13:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:41.371 13:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.371 13:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.371 13:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.371 13:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:41.371 13:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:42.313 13:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:42.313 13:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:42.313 13:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.313 13:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjJmN2Y3NGFkMDc5NDNlYmNiMTFlNzQzOGZmMThkNzhiMDExMzcwZGE5MGE4ZDBjgI8hZw==: 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: ]] 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NjRjNWFkODhkMjg4ZjMwMWU1OGRiN2ZmMDBlZTQ1YTBjMGI4MzgwMDI3MjUyZGU3ruBKcA==: 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.313 nvme0n1 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzM2Zjg1M2ZmZGRjZTUxMDUzYzlhY2Y2YjY3M2YxNGYd8LXC: 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: ]] 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODYzYjljNDFkY2M0YzIzMjUzY2RjNTJhZjYxZDQ4Mjb1bwvv: 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.313 request: 00:37:42.313 { 00:37:42.313 "name": "nvme0", 00:37:42.313 "dhchap_key": "key2", 00:37:42.313 "dhchap_ctrlr_key": "ckey1", 00:37:42.313 "method": "bdev_nvme_set_keys", 00:37:42.313 "req_id": 1 00:37:42.313 } 00:37:42.313 Got JSON-RPC error response 00:37:42.313 response: 00:37:42.313 { 00:37:42.313 "code": -13, 00:37:42.313 "message": "Permission denied" 00:37:42.313 } 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.313 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.574 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:37:42.574 13:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:37:43.517 13:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:43.517 rmmod nvme_tcp 00:37:43.517 rmmod nvme_fabrics 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 4108445 ']' 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 4108445 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 4108445 ']' 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 4108445 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4108445 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4108445' 00:37:43.517 killing process with pid 4108445 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 4108445 00:37:43.517 13:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 4108445 00:37:44.458 13:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:44.458 13:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:44.458 13:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:44.458 13:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:37:44.458 13:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:37:44.458 13:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:44.458 13:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:37:44.458 13:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:44.458 13:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:44.458 13:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:44.458 13:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:37:44.458 13:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:46.370 13:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:50.575 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:50.575 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:50.836 13:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.CKD /tmp/spdk.key-null.V3Z /tmp/spdk.key-sha256.pXn /tmp/spdk.key-sha384.uyv /tmp/spdk.key-sha512.Y9B /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:37:50.836 13:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:54.132 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:37:54.132 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:54.132 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:54.132 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:54.704 00:37:54.704 real 1m4.699s 00:37:54.704 user 0m58.090s 00:37:54.704 sys 0m16.373s 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.704 ************************************ 00:37:54.704 END TEST nvmf_auth_host 00:37:54.704 ************************************ 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:54.704 ************************************ 00:37:54.704 START TEST nvmf_digest 00:37:54.704 ************************************ 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:54.704 * Looking for test storage... 
00:37:54.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:54.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.704 --rc genhtml_branch_coverage=1 00:37:54.704 --rc genhtml_function_coverage=1 00:37:54.704 --rc genhtml_legend=1 00:37:54.704 --rc geninfo_all_blocks=1 00:37:54.704 --rc geninfo_unexecuted_blocks=1 00:37:54.704 00:37:54.704 ' 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:54.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.704 --rc genhtml_branch_coverage=1 00:37:54.704 --rc genhtml_function_coverage=1 00:37:54.704 --rc genhtml_legend=1 00:37:54.704 --rc geninfo_all_blocks=1 00:37:54.704 --rc geninfo_unexecuted_blocks=1 00:37:54.704 00:37:54.704 ' 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:54.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.704 --rc genhtml_branch_coverage=1 00:37:54.704 --rc genhtml_function_coverage=1 00:37:54.704 --rc genhtml_legend=1 00:37:54.704 --rc geninfo_all_blocks=1 00:37:54.704 --rc geninfo_unexecuted_blocks=1 00:37:54.704 00:37:54.704 ' 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:54.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.704 --rc genhtml_branch_coverage=1 00:37:54.704 --rc genhtml_function_coverage=1 00:37:54.704 --rc genhtml_legend=1 00:37:54.704 --rc geninfo_all_blocks=1 00:37:54.704 --rc geninfo_unexecuted_blocks=1 00:37:54.704 00:37:54.704 ' 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:54.704 
13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:54.704 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:54.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:54.966 13:43:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:37:54.966 13:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:03.104 
13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:03.104 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:03.104 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:03.105 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:03.105 Found net devices under 0000:31:00.0: cvl_0_0 
00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:03.105 Found net devices under 0000:31:00.1: cvl_0_1 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:03.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:03.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:38:03.105 00:38:03.105 --- 10.0.0.2 ping statistics --- 00:38:03.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:03.105 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:03.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:03.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:38:03.105 00:38:03.105 --- 10.0.0.1 ping statistics --- 00:38:03.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:03.105 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:03.105 ************************************ 00:38:03.105 START TEST nvmf_digest_clean 00:38:03.105 ************************************ 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=4127163 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 4127163 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4127163 ']' 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:03.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:03.105 13:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:03.105 [2024-11-07 13:43:10.987199] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:38:03.105 [2024-11-07 13:43:10.987303] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:03.366 [2024-11-07 13:43:11.133432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.366 [2024-11-07 13:43:11.228310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:03.366 [2024-11-07 13:43:11.228355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:03.366 [2024-11-07 13:43:11.228368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:03.366 [2024-11-07 13:43:11.228379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:03.366 [2024-11-07 13:43:11.228391] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:03.366 [2024-11-07 13:43:11.229618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:03.937 13:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:03.937 13:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:38:03.937 13:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:03.937 13:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:03.937 13:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:03.937 13:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:03.937 13:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:38:03.937 13:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:38:03.937 13:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:38:03.937 13:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:03.937 13:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:04.198 null0 00:38:04.198 [2024-11-07 13:43:12.051626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:04.198 [2024-11-07 13:43:12.075913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4127434 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4127434 /var/tmp/bperf.sock 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4127434 ']' 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:38:04.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:04.198 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:04.198 [2024-11-07 13:43:12.158378] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:38:04.198 [2024-11-07 13:43:12.158487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127434 ] 00:38:04.460 [2024-11-07 13:43:12.309635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:04.460 [2024-11-07 13:43:12.406460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:05.032 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:05.032 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:38:05.032 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:05.032 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:05.032 13:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:05.602 13:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:05.602 13:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:05.862 nvme0n1 00:38:05.862 13:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:05.862 13:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:05.862 Running I/O for 2 seconds... 
00:38:08.186 17179.00 IOPS, 67.11 MiB/s [2024-11-07T12:43:16.193Z] 17283.50 IOPS, 67.51 MiB/s 00:38:08.186 Latency(us) 00:38:08.186 [2024-11-07T12:43:16.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.186 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:08.186 nvme0n1 : 2.05 16964.46 66.27 0.00 0.00 7394.11 3290.45 46749.01 00:38:08.186 [2024-11-07T12:43:16.193Z] =================================================================================================================== 00:38:08.186 [2024-11-07T12:43:16.193Z] Total : 16964.46 66.27 0.00 0.00 7394.11 3290.45 46749.01 00:38:08.186 { 00:38:08.186 "results": [ 00:38:08.186 { 00:38:08.186 "job": "nvme0n1", 00:38:08.186 "core_mask": "0x2", 00:38:08.186 "workload": "randread", 00:38:08.186 "status": "finished", 00:38:08.186 "queue_depth": 128, 00:38:08.186 "io_size": 4096, 00:38:08.186 "runtime": 2.045158, 00:38:08.186 "iops": 16964.45946963511, 00:38:08.186 "mibps": 66.26741980326214, 00:38:08.186 "io_failed": 0, 00:38:08.186 "io_timeout": 0, 00:38:08.186 "avg_latency_us": 7394.107385310083, 00:38:08.186 "min_latency_us": 3290.4533333333334, 00:38:08.186 "max_latency_us": 46749.013333333336 00:38:08.186 } 00:38:08.186 ], 00:38:08.186 "core_count": 1 00:38:08.186 } 00:38:08.186 13:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:08.186 13:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:08.186 13:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:08.186 13:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:08.186 | select(.opcode=="crc32c") 00:38:08.186 | "\(.module_name) \(.executed)"' 00:38:08.186 13:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4127434 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4127434 ']' 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4127434 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4127434 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4127434' 00:38:08.186 killing process with pid 4127434 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4127434 00:38:08.186 Received shutdown signal, test time was about 2.000000 seconds 00:38:08.186 00:38:08.186 Latency(us) 00:38:08.186 [2024-11-07T12:43:16.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.186 [2024-11-07T12:43:16.193Z] =================================================================================================================== 00:38:08.186 [2024-11-07T12:43:16.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:08.186 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4127434 00:38:08.758 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:38:08.758 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:08.758 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:08.758 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:08.758 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:08.758 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:08.758 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:08.758 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4128376 00:38:08.758 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4128376 /var/tmp/bperf.sock 00:38:08.758 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4128376 ']' 00:38:08.758 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:08.759 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:08.759 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:08.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:08.759 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:08.759 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:08.759 13:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:08.759 [2024-11-07 13:43:16.760112] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:38:08.759 [2024-11-07 13:43:16.760222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128376 ] 00:38:08.759 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:08.759 Zero copy mechanism will not be used. 00:38:09.019 [2024-11-07 13:43:16.913579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.019 [2024-11-07 13:43:17.010798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:09.590 13:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:09.590 13:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:38:09.590 13:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:09.590 13:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:09.590 13:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:10.160 13:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:10.160 13:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:10.421 nvme0n1 00:38:10.421 13:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:10.421 13:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:10.421 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:10.421 Zero copy mechanism will not be used. 00:38:10.421 Running I/O for 2 seconds... 
00:38:12.746 3009.00 IOPS, 376.12 MiB/s [2024-11-07T12:43:20.753Z] 3164.00 IOPS, 395.50 MiB/s 00:38:12.746 Latency(us) 00:38:12.746 [2024-11-07T12:43:20.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.747 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:38:12.747 nvme0n1 : 2.00 3165.64 395.71 0.00 0.00 5051.60 651.95 14527.15 00:38:12.747 [2024-11-07T12:43:20.754Z] =================================================================================================================== 00:38:12.747 [2024-11-07T12:43:20.754Z] Total : 3165.64 395.71 0.00 0.00 5051.60 651.95 14527.15 00:38:12.747 { 00:38:12.747 "results": [ 00:38:12.747 { 00:38:12.747 "job": "nvme0n1", 00:38:12.747 "core_mask": "0x2", 00:38:12.747 "workload": "randread", 00:38:12.747 "status": "finished", 00:38:12.747 "queue_depth": 16, 00:38:12.747 "io_size": 131072, 00:38:12.747 "runtime": 2.004647, 00:38:12.747 "iops": 3165.644624714476, 00:38:12.747 "mibps": 395.7055780893095, 00:38:12.747 "io_failed": 0, 00:38:12.747 "io_timeout": 0, 00:38:12.747 "avg_latency_us": 5051.600747977729, 00:38:12.747 "min_latency_us": 651.9466666666667, 00:38:12.747 "max_latency_us": 14527.146666666667 00:38:12.747 } 00:38:12.747 ], 00:38:12.747 "core_count": 1 00:38:12.747 } 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:12.747 | select(.opcode=="crc32c") 00:38:12.747 | "\(.module_name) \(.executed)"' 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4128376 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4128376 ']' 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4128376 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4128376 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4128376' 00:38:12.747 killing process with pid 4128376 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4128376 00:38:12.747 Received shutdown signal, test time was about 2.000000 seconds 00:38:12.747 00:38:12.747 Latency(us) 00:38:12.747 [2024-11-07T12:43:20.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.747 [2024-11-07T12:43:20.754Z] =================================================================================================================== 00:38:12.747 [2024-11-07T12:43:20.754Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:12.747 13:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4128376 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4129107 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4129107 /var/tmp/bperf.sock 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4129107 ']' 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:13.317 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:13.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:13.318 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:13.318 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:13.318 [2024-11-07 13:43:21.179203] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:38:13.318 [2024-11-07 13:43:21.179310] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129107 ] 00:38:13.318 [2024-11-07 13:43:21.318426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.578 [2024-11-07 13:43:21.393035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:14.149 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:14.149 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:38:14.149 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:14.149 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:14.149 13:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:14.409 13:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:14.409 13:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:14.670 nvme0n1 00:38:14.670 13:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:14.670 13:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:14.670 Running I/O for 2 seconds... 
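The MiB/s figures in these result tables are derived, not independently measured: throughput = iops * io_size / 2^20. For the randwrite run below, 19512.84 IOPS at io_size 4096 gives 19512.84 * 4096 / 1048576 = 76.22 MiB/s, matching the "mibps" field in the JSON block. A quick cross-check against any of the JSON results in this log, using jq as the suite itself does:

# verify mibps == iops * io_size / 1048576 for a results entry
echo '{"iops": 19512.84485455036, "io_size": 4096}' |
  jq '.iops * .io_size / 1048576'    # prints 76.2220502..., as reported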
00:38:16.998 19399.00 IOPS, 75.78 MiB/s [2024-11-07T12:43:25.005Z] 19493.50 IOPS, 76.15 MiB/s 00:38:16.998 Latency(us) 00:38:16.998 [2024-11-07T12:43:25.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:16.998 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:16.998 nvme0n1 : 2.00 19512.84 76.22 0.00 0.00 6554.03 3850.24 11578.03 00:38:16.998 [2024-11-07T12:43:25.005Z] =================================================================================================================== 00:38:16.998 [2024-11-07T12:43:25.005Z] Total : 19512.84 76.22 0.00 0.00 6554.03 3850.24 11578.03 00:38:16.998 { 00:38:16.998 "results": [ 00:38:16.998 { 00:38:16.998 "job": "nvme0n1", 00:38:16.998 "core_mask": "0x2", 00:38:16.998 "workload": "randwrite", 00:38:16.998 "status": "finished", 00:38:16.998 "queue_depth": 128, 00:38:16.998 "io_size": 4096, 00:38:16.998 "runtime": 2.004577, 00:38:16.998 "iops": 19512.84485455036, 00:38:16.998 "mibps": 76.22205021308734, 00:38:16.998 "io_failed": 0, 00:38:16.998 "io_timeout": 0, 00:38:16.998 "avg_latency_us": 6554.025499509992, 00:38:16.998 "min_latency_us": 3850.24, 00:38:16.998 "max_latency_us": 11578.026666666667 00:38:16.998 } 00:38:16.998 ], 00:38:16.998 "core_count": 1 00:38:16.998 } 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:16.998 | select(.opcode=="crc32c") 00:38:16.998 | "\(.module_name) \(.executed)"' 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4129107 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4129107 ']' 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4129107 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4129107 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 
= sudo ']' 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4129107' 00:38:16.998 killing process with pid 4129107 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4129107 00:38:16.998 Received shutdown signal, test time was about 2.000000 seconds 00:38:16.998 00:38:16.998 Latency(us) 00:38:16.998 [2024-11-07T12:43:25.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:16.998 [2024-11-07T12:43:25.005Z] =================================================================================================================== 00:38:16.998 [2024-11-07T12:43:25.005Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:16.998 13:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4129107 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4129901 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4129901 /var/tmp/bperf.sock 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4129901 ']' 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:17.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:17.571 13:43:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:17.571 [2024-11-07 13:43:25.405823] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:38:17.571 [2024-11-07 13:43:25.405942] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129901 ] 00:38:17.571 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:17.571 Zero copy mechanism will not be used. 00:38:17.571 [2024-11-07 13:43:25.545779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.832 [2024-11-07 13:43:25.619852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:18.404 13:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:18.404 13:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:38:18.404 13:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:18.404 13:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:18.404 13:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:18.664 13:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:18.665 13:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:18.925 nvme0n1 00:38:18.925 13:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:18.925 13:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:19.187 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:19.187 Zero copy mechanism will not be used. 00:38:19.187 Running I/O for 2 seconds... 
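After each timed run the suite pulls the accel framework's crc32c counters and asserts that the expected module (software here, since scan_dsa=false) actually executed work; that is the jq filter visible after every results block in this section. A standalone equivalent of that check, using the same socket, filter, and variable names as the logged shell trace:

# read crc32c accel stats from bdevperf and assert the software module ran
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
  jq -r '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' |
  while read -r acc_module acc_executed; do
    [ "$acc_module" = software ] && [ "$acc_executed" -gt 0 ] && \
      echo "crc32c via software: $acc_executed ops"
  done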
00:38:21.072 6512.00 IOPS, 814.00 MiB/s [2024-11-07T12:43:29.079Z] 6424.50 IOPS, 803.06 MiB/s 00:38:21.072 Latency(us) 00:38:21.072 [2024-11-07T12:43:29.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.072 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:21.072 nvme0n1 : 2.00 6423.52 802.94 0.00 0.00 2487.08 1665.71 14854.83 00:38:21.072 [2024-11-07T12:43:29.079Z] =================================================================================================================== 00:38:21.072 [2024-11-07T12:43:29.079Z] Total : 6423.52 802.94 0.00 0.00 2487.08 1665.71 14854.83 00:38:21.072 { 00:38:21.072 "results": [ 00:38:21.072 { 00:38:21.072 "job": "nvme0n1", 00:38:21.072 "core_mask": "0x2", 00:38:21.072 "workload": "randwrite", 00:38:21.072 "status": "finished", 00:38:21.072 "queue_depth": 16, 00:38:21.072 "io_size": 131072, 00:38:21.072 "runtime": 2.003418, 00:38:21.072 "iops": 6423.522200559244, 00:38:21.072 "mibps": 802.9402750699055, 00:38:21.072 "io_failed": 0, 00:38:21.072 "io_timeout": 0, 00:38:21.072 "avg_latency_us": 2487.0826699821273, 00:38:21.072 "min_latency_us": 1665.7066666666667, 00:38:21.072 "max_latency_us": 14854.826666666666 00:38:21.072 } 00:38:21.072 ], 00:38:21.072 "core_count": 1 00:38:21.072 } 00:38:21.072 13:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:21.072 13:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:21.072 13:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:21.072 13:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:21.072 | select(.opcode=="crc32c") 00:38:21.072 | "\(.module_name) \(.executed)"' 00:38:21.072 13:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:21.333 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:21.333 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:21.333 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:21.333 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:21.333 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4129901 00:38:21.333 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4129901 ']' 00:38:21.333 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4129901 00:38:21.333 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:38:21.333 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:21.333 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4129901 00:38:21.334 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:21.334 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # 
'[' reactor_1 = sudo ']' 00:38:21.334 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4129901' 00:38:21.334 killing process with pid 4129901 00:38:21.334 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4129901 00:38:21.334 Received shutdown signal, test time was about 2.000000 seconds 00:38:21.334 00:38:21.334 Latency(us) 00:38:21.334 [2024-11-07T12:43:29.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.334 [2024-11-07T12:43:29.341Z] =================================================================================================================== 00:38:21.334 [2024-11-07T12:43:29.341Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:21.334 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4129901 00:38:21.905 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4127163 00:38:21.905 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4127163 ']' 00:38:21.905 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4127163 00:38:21.905 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:38:21.905 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:21.905 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4127163 00:38:21.905 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:21.905 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:21.905 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4127163' 00:38:21.905 killing process with pid 4127163 00:38:21.905 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4127163 00:38:21.905 13:43:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4127163 00:38:22.475 00:38:22.475 real 0m19.577s 00:38:22.475 user 0m37.496s 00:38:22.475 sys 0m3.909s 00:38:22.475 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:22.475 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:22.475 ************************************ 00:38:22.475 END TEST nvmf_digest_clean 00:38:22.475 ************************************ 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:22.736 ************************************ 00:38:22.736 START TEST nvmf_digest_error 00:38:22.736 ************************************ 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=4130835 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 4130835 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4130835 ']' 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:22.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:22.736 13:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:22.736 [2024-11-07 13:43:30.610147] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:38:22.736 [2024-11-07 13:43:30.610267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:22.997 [2024-11-07 13:43:30.768263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:22.997 [2024-11-07 13:43:30.868940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:22.997 [2024-11-07 13:43:30.868987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:22.997 [2024-11-07 13:43:30.868999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:22.997 [2024-11-07 13:43:30.869010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:22.997 [2024-11-07 13:43:30.869021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
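nvmf_digest_error differs from the clean variant in one setup step: the target is started with --wait-for-rpc so that, before framework init, the crc32c opcode can be reassigned to the accel "error" module, which the test then uses to corrupt digests on demand and provoke the "data digest error" failures seen below. A sketch of the RPC pair the suite issues further down (flags verbatim from the rpc_cmd calls in this log; rpc.py defaults to the target socket /var/tmp/spdk.sock when -s is omitted):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# reassign crc32c to the accel 'error' module; accepted only before framework
# init, which is why the target above was launched with --wait-for-rpc
$SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
# per test case the suite then disarms or arms corruption on that module
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256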
00:38:22.997 [2024-11-07 13:43:30.870216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.569 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:23.569 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:38:23.569 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:23.569 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:23.569 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:23.569 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:23.569 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:38:23.570 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.570 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:23.570 [2024-11-07 13:43:31.396076] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:38:23.570 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.570 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:38:23.570 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:38:23.570 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.570 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:23.831 null0 00:38:23.831 [2024-11-07 13:43:31.656528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:23.831 [2024-11-07 13:43:31.680771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4131158 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4131158 /var/tmp/bperf.sock 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4131158 ']' 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:23.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:23.831 13:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:38:23.831 [2024-11-07 13:43:31.773030] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:38:23.831 [2024-11-07 13:43:31.773138] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4131158 ] 00:38:24.092 [2024-11-07 13:43:31.914444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.092 [2024-11-07 13:43:31.988873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:24.664 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:24.664 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:38:24.664 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:24.664 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:24.926 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:24.926 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.926 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:24.926 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.926 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:24.926 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:24.926 nvme0n1 00:38:24.926 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:24.926 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.926 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # 
set +x 00:38:24.926 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.926 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:24.926 13:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:25.187 Running I/O for 2 seconds... 00:38:25.188 [2024-11-07 13:43:33.030916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.188 [2024-11-07 13:43:33.030959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.188 [2024-11-07 13:43:33.030973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.188 [2024-11-07 13:43:33.044974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.188 [2024-11-07 13:43:33.045002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.188 [2024-11-07 13:43:33.045012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.188 [2024-11-07 13:43:33.059349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.188 [2024-11-07 13:43:33.059374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.188 [2024-11-07 13:43:33.059383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.188 [2024-11-07 13:43:33.074303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.188 [2024-11-07 13:43:33.074327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.188 [2024-11-07 13:43:33.074337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.188 [2024-11-07 13:43:33.089404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.188 [2024-11-07 13:43:33.089428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.188 [2024-11-07 13:43:33.089437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.188 [2024-11-07 13:43:33.103618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.188 [2024-11-07 13:43:33.103641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.188 [2024-11-07 13:43:33.103651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.188 [2024-11-07 13:43:33.115367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.188 [2024-11-07 13:43:33.115389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.188 [2024-11-07 13:43:33.115399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.188 [2024-11-07 13:43:33.131794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.188 [2024-11-07 13:43:33.131817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.188 [2024-11-07 13:43:33.131830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.188 [2024-11-07 13:43:33.144744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.188 [2024-11-07 13:43:33.144767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.188 [2024-11-07 13:43:33.144776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.188 [2024-11-07 13:43:33.157965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.188 [2024-11-07 13:43:33.157987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.188 [2024-11-07 13:43:33.157996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.188 [2024-11-07 13:43:33.173791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.188 [2024-11-07 13:43:33.173814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.188 [2024-11-07 13:43:33.173823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.188 [2024-11-07 13:43:33.189195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.188 [2024-11-07 13:43:33.189217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.188 [2024-11-07 13:43:33.189227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.201784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.502 [2024-11-07 13:43:33.201807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.502 
[2024-11-07 13:43:33.201816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.216293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.502 [2024-11-07 13:43:33.216317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.502 [2024-11-07 13:43:33.216327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.230261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.502 [2024-11-07 13:43:33.230283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.502 [2024-11-07 13:43:33.230292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.243760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.502 [2024-11-07 13:43:33.243783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.502 [2024-11-07 13:43:33.243792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.257614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.502 [2024-11-07 13:43:33.257636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.502 [2024-11-07 13:43:33.257645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.271464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.502 [2024-11-07 13:43:33.271486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.502 [2024-11-07 13:43:33.271496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.283946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.502 [2024-11-07 13:43:33.283968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.502 [2024-11-07 13:43:33.283977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.299795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.502 [2024-11-07 13:43:33.299817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:53 nsid:1 lba:13511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.502 [2024-11-07 13:43:33.299826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.314006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.502 [2024-11-07 13:43:33.314029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.502 [2024-11-07 13:43:33.314038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.327631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.502 [2024-11-07 13:43:33.327653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.502 [2024-11-07 13:43:33.327661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.340994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.502 [2024-11-07 13:43:33.341017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.502 [2024-11-07 13:43:33.341025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.354253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.502 [2024-11-07 13:43:33.354275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.502 [2024-11-07 13:43:33.354283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.502 [2024-11-07 13:43:33.368064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.503 [2024-11-07 13:43:33.368086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.503 [2024-11-07 13:43:33.368098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.503 [2024-11-07 13:43:33.382430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.503 [2024-11-07 13:43:33.382452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.503 [2024-11-07 13:43:33.382461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.503 [2024-11-07 13:43:33.397419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.503 [2024-11-07 
13:43:33.397441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.503 [2024-11-07 13:43:33.397450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.503 [2024-11-07 13:43:33.410656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.503 [2024-11-07 13:43:33.410678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.503 [2024-11-07 13:43:33.410687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.503 [2024-11-07 13:43:33.421903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.503 [2024-11-07 13:43:33.421925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.503 [2024-11-07 13:43:33.421934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.503 [2024-11-07 13:43:33.438497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.503 [2024-11-07 13:43:33.438519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.503 [2024-11-07 13:43:33.438528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.503 [2024-11-07 13:43:33.452310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.503 [2024-11-07 13:43:33.452332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.503 [2024-11-07 13:43:33.452342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.503 [2024-11-07 13:43:33.466514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.503 [2024-11-07 13:43:33.466537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.503 [2024-11-07 13:43:33.466546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.503 [2024-11-07 13:43:33.479937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.503 [2024-11-07 13:43:33.479959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.503 [2024-11-07 13:43:33.479968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.503 [2024-11-07 13:43:33.495207] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.503 [2024-11-07 13:43:33.495229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.503 [2024-11-07 13:43:33.495238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.509244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.509267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.509277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.522411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.522434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.522442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.534931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.534953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.534962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.549116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.549138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.549147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.563541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.563563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.563572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.579146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.579168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.579177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.590959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.590989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.590998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.605413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.605435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.605447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.618143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.618165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.618174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.633253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.633275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.633284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.649583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.649605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.649614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.662664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.662687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.662696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.822 [2024-11-07 13:43:33.677254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:25.822 [2024-11-07 13:43:33.677276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.822 [2024-11-07 13:43:33.677285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.822 [2024-11-07 13:43:33.690252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600)
00:38:25.822 [2024-11-07 13:43:33.690274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.822 [2024-11-07 13:43:33.690283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for the reads completed between 13:43:33.701 and 13:43:34.008 on the same tqpair; only cid and lba vary ...]
00:38:26.093 18108.00 IOPS, 70.73 MiB/s [2024-11-07T12:43:34.100Z]
[... the triplets continue in the same pattern between 13:43:34.023 and 13:43:35.003 while the run proceeds ...]
00:38:27.232 18178.50 IOPS, 71.01 MiB/s [2024-11-07T12:43:35.239Z]
[2024-11-07 13:43:35.017061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600)
00:38:27.232 [2024-11-07 13:43:35.017085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:27.232 [2024-11-07 13:43:35.017095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:27.232
00:38:27.232 Latency(us)
00:38:27.232 [2024-11-07T12:43:35.239Z] Device Information : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:38:27.232 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:38:27.232 nvme0n1            :       2.04           17844.46      69.70      0.00     0.00    7025.23    2471.25   47622.83
00:38:27.232 [2024-11-07T12:43:35.239Z] ===================================================================================================================
00:38:27.232 [2024-11-07T12:43:35.239Z] Total              :                17844.46      69.70      0.00     0.00    7025.23    2471.25   47622.83
00:38:27.232 {
00:38:27.232   "results": [
00:38:27.232     {
00:38:27.232       "job": "nvme0n1",
00:38:27.232       "core_mask": "0x2",
00:38:27.232       "workload": "randread",
00:38:27.232       "status": "finished",
00:38:27.232       "queue_depth": 128,
00:38:27.232       "io_size": 4096,
00:38:27.232       "runtime": 2.044612,
00:38:27.232       "iops": 17844.4614430513,
00:38:27.232       "mibps": 69.70492751191914,
00:38:27.232       "io_failed": 0,
00:38:27.232       "io_timeout": 0,
00:38:27.232       "avg_latency_us": 7025.225650724042,
00:38:27.232       "min_latency_us": 2471.2533333333336,
00:38:27.232       "max_latency_us": 47622.82666666667
00:38:27.232     }
00:38:27.232   ],
00:38:27.232   "core_count": 1
00:38:27.232 }
00:38:27.232 13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:27.232 | .driver_specific
00:38:27.232 | .nvme_error
00:38:27.232 | .status_code
00:38:27.232 | .command_transient_transport_error'
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:27.493 13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
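The digest.sh@27/@28 lines above trace the test's get_transient_errcount helper. A minimal sketch of what it does, reconstructed from the xtrace output rather than copied verbatim from host/digest.sh:

  get_transient_errcount() {
      local bdev=$1  # bdev name, e.g. nvme0n1
      # Ask the bdevperf instance behind /var/tmp/bperf.sock for per-bdev iostat
      # and pull out the NVMe "command transient transport error" counter.
      bperf_rpc bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }

The (( 143 > 0 )) line is the digest.sh@71 check after expansion: 143 reads completed with a transient transport error during the 2-second run, so the assertion passes and the first bperf instance can be torn down.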
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4131158
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4131158 ']'
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4131158
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4131158
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:38:27.493 13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4131158'
killing process with pid 4131158
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4131158
Received shutdown signal, test time was about 2.000000 seconds
00:38:27.493
00:38:27.493 Latency(us)
00:38:27.493 [2024-11-07T12:43:35.500Z] Device Information : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:38:27.493 [2024-11-07T12:43:35.500Z] ===================================================================================================================
00:38:27.493 [2024-11-07T12:43:35.500Z] Total              :       0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:38:27.493 13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4131158
00:38:27.754 13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:38:27.755 13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:27.755 13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:38:27.755 13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:38:27.755 13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:38:28.015 13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4131841
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4131841 /var/tmp/bperf.sock
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4131841 ']'
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
13:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:28.016 [2024-11-07 13:43:35.846342] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:38:28.016 [2024-11-07 13:43:35.846448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4131841 ]
00:38:28.016 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:28.016 Zero copy mechanism will not be used.
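Condensed, the launch traced above amounts to the sketch below. It is not a verbatim copy of host/digest.sh; the & / $! form and the behavior of waitforlisten (polling until the pid is alive and the UNIX-domain RPC socket accepts connections) are assumptions consistent with the trace:

  # Start bdevperf as the benchmarking host process: 128 KiB random reads at
  # queue depth 16 in 2-second runs; -z keeps it idle until perform_tests
  # arrives over the RPC socket.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Issue no RPCs until the new process listens on /var/tmp/bperf.sock.
  waitforlisten "$bperfpid" /var/tmp/bperf.sock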
00:38:28.016 [2024-11-07 13:43:35.994526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:28.277 [2024-11-07 13:43:36.068673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:28.848 13:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
13:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
13:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
13:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
13:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
13:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
13:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
13:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
13:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:29.108 nvme0n1
13:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
13:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
13:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
13:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
13:43:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
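Stripped of the xtrace noise, the setup for this run is the sequence below, a sketch of the traced calls: bperf_rpc wraps rpc.py against bdevperf's /var/tmp/bperf.sock, while rpc_cmd talks to the nvmf target's default RPC socket. The inline readings of --bdev-retry-count -1 and -i 32 are assumptions, not log output:

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # keep NVMe error counters; -1: retry failed I/O indefinitely
  rpc_cmd accel_error_inject_error -o crc32c -t disable                    # attach with clean digests first
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                       # --ddgst enables the TCP data digest
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32              # corrupt crc32c results on the target (-i 32: assumed injection interval)
  bperf_py perform_tests                                                   # kick off the 2-second randread run

Each corrupted digest then surfaces on the host side as one of the data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets that follow.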
00:38:29.109 [2024-11-07 13:43:37.105785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600)
00:38:29.109 [2024-11-07 13:43:37.105828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:29.109 [2024-11-07 13:43:37.105842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0
[... the same triplet repeats for the 128 KiB (len:32) reads completed between 13:43:37.114 and 13:43:37.287, with cid cycling through 11-14 and lba varying ...]
00:38:29.371 [2024-11-07 13:43:37.296420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600)
00:38:29.372 [2024-11-07 13:43:37.296444] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.372 [2024-11-07 13:43:37.296454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.372 [2024-11-07 13:43:37.306065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.372 [2024-11-07 13:43:37.306088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.372 [2024-11-07 13:43:37.306097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.372 [2024-11-07 13:43:37.316108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.372 [2024-11-07 13:43:37.316131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.372 [2024-11-07 13:43:37.316140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.372 [2024-11-07 13:43:37.325882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.372 [2024-11-07 13:43:37.325905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.372 [2024-11-07 13:43:37.325914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.372 [2024-11-07 13:43:37.336939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.372 [2024-11-07 13:43:37.336962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.372 [2024-11-07 13:43:37.336971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.372 [2024-11-07 13:43:37.349810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.372 [2024-11-07 13:43:37.349833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.372 [2024-11-07 13:43:37.349842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.372 [2024-11-07 13:43:37.358410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.372 [2024-11-07 13:43:37.358433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.372 [2024-11-07 13:43:37.358442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.372 [2024-11-07 13:43:37.368887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000417600) 00:38:29.372 [2024-11-07 13:43:37.368909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.372 [2024-11-07 13:43:37.368919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.634 [2024-11-07 13:43:37.376987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.634 [2024-11-07 13:43:37.377011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.634 [2024-11-07 13:43:37.377020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.634 [2024-11-07 13:43:37.387322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.634 [2024-11-07 13:43:37.387346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.634 [2024-11-07 13:43:37.387355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.634 [2024-11-07 13:43:37.398137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.634 [2024-11-07 13:43:37.398161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.634 [2024-11-07 13:43:37.398170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.634 [2024-11-07 13:43:37.407332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.634 [2024-11-07 13:43:37.407355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.634 [2024-11-07 13:43:37.407364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.634 [2024-11-07 13:43:37.418114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.634 [2024-11-07 13:43:37.418138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.634 [2024-11-07 13:43:37.418147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.634 [2024-11-07 13:43:37.429168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.634 [2024-11-07 13:43:37.429192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.634 [2024-11-07 13:43:37.429204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.634 [2024-11-07 13:43:37.441846] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.634 [2024-11-07 13:43:37.441874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.634 [2024-11-07 13:43:37.441883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.634 [2024-11-07 13:43:37.454917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.634 [2024-11-07 13:43:37.454940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.634 [2024-11-07 13:43:37.454949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.634 [2024-11-07 13:43:37.465872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.634 [2024-11-07 13:43:37.465896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.634 [2024-11-07 13:43:37.465904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.634 [2024-11-07 13:43:37.477839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.634 [2024-11-07 13:43:37.477867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.477877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.488713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.488736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.488745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.500108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.500131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.500140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.511285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.511316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.511325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.523467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.523491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.523500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.536090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.536113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.536122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.548846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.548875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.548884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.560496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.560520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.560529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.573485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.573508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.573517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.586028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.586051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.586060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.597443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.597466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.597475] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.607260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.607284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.607293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.617401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.617425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.617433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.635 [2024-11-07 13:43:37.628059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.635 [2024-11-07 13:43:37.628083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.635 [2024-11-07 13:43:37.628095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.637607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.637631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.637641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.648207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.648231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.648240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.657088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.657111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.657120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.668103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.668127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.668136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.679570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.679595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.679604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.690054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.690078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.690087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.701817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.701841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.701850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.714337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.714361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.714370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.726906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.726936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.726945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.738154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.738178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.738187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.750020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.750043] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.750052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.760915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.760938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.760947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.771157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.771180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.771190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.782993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.783016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.783025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.795422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.795446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.795455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.806226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.806250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.806259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.817533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.817557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.817570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.828838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.828867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.828877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.837516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.837539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.837549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.848927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.848950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.848959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.860955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.896 [2024-11-07 13:43:37.860979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.896 [2024-11-07 13:43:37.860987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:29.896 [2024-11-07 13:43:37.872067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.897 [2024-11-07 13:43:37.872090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.897 [2024-11-07 13:43:37.872099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:29.897 [2024-11-07 13:43:37.883583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.897 [2024-11-07 13:43:37.883606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.897 [2024-11-07 13:43:37.883616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.897 [2024-11-07 13:43:37.893485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:29.897 [2024-11-07 13:43:37.893509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.897 [2024-11-07 13:43:37.893518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:30.158 
[2024-11-07 13:43:37.904369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.158 [2024-11-07 13:43:37.904393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.158 [2024-11-07 13:43:37.904403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:30.158 [2024-11-07 13:43:37.914315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.158 [2024-11-07 13:43:37.914342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.158 [2024-11-07 13:43:37.914351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:30.158 [2024-11-07 13:43:37.924620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.158 [2024-11-07 13:43:37.924644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.158 [2024-11-07 13:43:37.924653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:30.158 [2024-11-07 13:43:37.933845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.158 [2024-11-07 13:43:37.933874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.158 [2024-11-07 13:43:37.933883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:30.158 [2024-11-07 13:43:37.945251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.158 [2024-11-07 13:43:37.945275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.158 [2024-11-07 13:43:37.945283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:30.158 [2024-11-07 13:43:37.956855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.158 [2024-11-07 13:43:37.956884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.158 [2024-11-07 13:43:37.956894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:30.158 [2024-11-07 13:43:37.964159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:37.964182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:37.964190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:37.973397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:37.973420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:37.973429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:37.984448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:37.984471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:37.984480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:37.994834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:37.994856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:37.994874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.005154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.005177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.005186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.013309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.013333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.013342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.023260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.023283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.023292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.033137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.033161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 
[2024-11-07 13:43:38.033170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.041205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.041229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.041238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.052045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.052069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.052078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.060592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.060617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.060625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.070584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.070607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.070616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.080913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.080939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.080948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.092006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.092030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.092039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:30.159 2953.00 IOPS, 369.12 MiB/s [2024-11-07T12:43:38.166Z] [2024-11-07 13:43:38.102159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.102184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.102193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.111590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.111613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.111623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.121637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.121661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.121670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.132330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.132354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.132363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.142930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.142953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.142962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:30.159 [2024-11-07 13:43:38.153772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.159 [2024-11-07 13:43:38.153796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.159 [2024-11-07 13:43:38.153805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:30.421 [2024-11-07 13:43:38.163622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.421 [2024-11-07 13:43:38.163645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.421 [2024-11-07 13:43:38.163658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:30.421 [2024-11-07 13:43:38.172457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x615000417600) 00:38:30.421 [2024-11-07 13:43:38.172481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.421 [2024-11-07 13:43:38.172490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:30.421 [2024-11-07 13:43:38.184108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.421 [2024-11-07 13:43:38.184131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.421 [2024-11-07 13:43:38.184140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:30.421 [2024-11-07 13:43:38.195877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.421 [2024-11-07 13:43:38.195899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.421 [2024-11-07 13:43:38.195908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:30.421 [2024-11-07 13:43:38.204443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.421 [2024-11-07 13:43:38.204467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.421 [2024-11-07 13:43:38.204475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:30.421 [2024-11-07 13:43:38.215638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.421 [2024-11-07 13:43:38.215662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.421 [2024-11-07 13:43:38.215671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:30.421 [2024-11-07 13:43:38.226156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.421 [2024-11-07 13:43:38.226180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.421 [2024-11-07 13:43:38.226189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:30.421 [2024-11-07 13:43:38.236647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600) 00:38:30.421 [2024-11-07 13:43:38.236670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.421 [2024-11-07 13:43:38.236679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:30.421 [2024-11-07 13:43:38.245963] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000417600)
00:38:30.421 [2024-11-07 13:43:38.245986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.421 [2024-11-07 13:43:38.245995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0
[... dozens of further identical data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets on tqpair=(0x615000417600), qid:1, timestamps 13:43:38.255413 through 13:43:39.099247, elided ...]
00:38:31.208 2940.00 IOPS, 367.50 MiB/s
00:38:31.208 Latency(us)
[2024-11-07T12:43:39.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:31.208 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:38:31.208 nvme0n1 : 2.00 2943.61 367.95 0.00 0.00 5432.42 942.08 13434.88
[2024-11-07T12:43:39.215Z] ===================================================================================================================
[2024-11-07T12:43:39.215Z] Total : 2943.61 367.95
0.00 0.00 5432.42 942.08 13434.88 00:38:31.208 { 00:38:31.208 "results": [ 00:38:31.208 { 00:38:31.208 "job": "nvme0n1", 00:38:31.208 "core_mask": "0x2", 00:38:31.208 "workload": "randread", 00:38:31.208 "status": "finished", 00:38:31.208 "queue_depth": 16, 00:38:31.208 "io_size": 131072, 00:38:31.208 "runtime": 2.002982, 00:38:31.208 "iops": 2943.611075885854, 00:38:31.208 "mibps": 367.95138448573175, 00:38:31.208 "io_failed": 0, 00:38:31.208 "io_timeout": 0, 00:38:31.208 "avg_latency_us": 5432.420732700136, 00:38:31.208 "min_latency_us": 942.08, 00:38:31.208 "max_latency_us": 13434.88 00:38:31.208 } 00:38:31.208 ], 00:38:31.208 "core_count": 1 00:38:31.208 } 00:38:31.208 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:31.208 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:31.208 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:31.208 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:31.208 | .driver_specific 00:38:31.208 | .nvme_error 00:38:31.208 | .status_code 00:38:31.208 | .command_transient_transport_error' 00:38:31.468 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 190 > 0 )) 00:38:31.468 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4131841 00:38:31.468 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4131841 ']' 00:38:31.468 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4131841 00:38:31.468 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:38:31.468 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:31.468 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4131841 00:38:31.469 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:31.469 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:31.469 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4131841' 00:38:31.469 killing process with pid 4131841 00:38:31.469 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4131841 00:38:31.469 Received shutdown signal, test time was about 2.000000 seconds 00:38:31.469 00:38:31.469 Latency(us) 00:38:31.469 [2024-11-07T12:43:39.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:31.469 [2024-11-07T12:43:39.476Z] =================================================================================================================== 00:38:31.469 [2024-11-07T12:43:39.476Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:31.469 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4131841 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 
4096 128 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4132660 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4132660 /var/tmp/bperf.sock 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4132660 ']' 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:32.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:32.042 13:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:32.042 [2024-11-07 13:43:39.884595] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:38:32.042 [2024-11-07 13:43:39.884708] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4132660 ] 00:38:32.042 [2024-11-07 13:43:40.030418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.304 [2024-11-07 13:43:40.111555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:32.874 13:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:32.874 13:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:38:32.874 13:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:32.874 13:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:32.874 13:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:32.874 13:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.874 13:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:32.874 13:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.874 13:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:32.874 13:43:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:33.134 nvme0n1 00:38:33.396 13:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:33.396 13:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.396 13:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:33.396 13:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.396 13:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:33.396 13:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:33.396 Running I/O for 2 seconds... 
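With that, the whole digest-error sequence for this pass is in place: bdev_nvme keeps per-status-code error counters and retries indefinitely (--nvme-error-stat --bdev-retry-count -1), crc32c injection is disabled while the controller attaches so the connection comes up clean, the controller is attached with data digest enabled (--ddgst), and injection is then flipped to corrupt mode at the traced interval (-i 256) before perform_tests starts the 2-second run. Each corrupted digest appears below as a Data digest error followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion that is retried and counted instead of failing the I/O (the earlier randread pass ended with io_failed 0 and a transient-error count of 190). A condensed, illustrative sketch of the RPC sequence, assembled from the RPCs traced in this section; it is not the digest.sh source:

```bash
#!/usr/bin/env bash
rpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
}

# Keep per-status-code NVMe error counters and retry failed I/O forever.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with injection disabled so the connection itself comes up clean;
# --ddgst enables the TCP data digest that will later be corrupted.
rpc accel_error_inject_error -o crc32c -t disable
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm the injector: corrupt crc32c results at the traced interval.
rpc accel_error_inject_error -o crc32c -t corrupt -i 256

# After perform_tests finishes, read back the transient-error counter using
# the same jq filter as the traced get_transient_errcount helper.
errs=$(rpc bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))  # the test passes only if digest errors were actually seen
```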
00:38:33.396 [2024-11-07 13:43:41.271760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2510
00:38:33.396 [2024-11-07 13:43:41.273810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:33.396 [2024-11-07 13:43:41.273845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0
[... dozens of further Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets on tqpair=(0x618000004480) at varying pdu offsets, timestamps 13:43:41.282815 through 13:43:41.710456, elided ...]
00:38:33.921 [2024-11-07 13:43:41.723845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 [2024-11-07 13:43:41.726152] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.921 [2024-11-07 13:43:41.726173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:33.921 [2024-11-07 13:43:41.734509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:38:33.921 [2024-11-07 13:43:41.736110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.921 [2024-11-07 13:43:41.736131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:33.921 [2024-11-07 13:43:41.748676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5220 00:38:33.921 [2024-11-07 13:43:41.750228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.921 [2024-11-07 13:43:41.750250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:33.921 [2024-11-07 13:43:41.763677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:38:33.921 [2024-11-07 13:43:41.765976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.921 [2024-11-07 13:43:41.765998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:33.921 [2024-11-07 13:43:41.774342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:38:33.921 [2024-11-07 13:43:41.775925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.921 [2024-11-07 13:43:41.775946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:33.921 [2024-11-07 13:43:41.788471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:38:33.921 [2024-11-07 13:43:41.790040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.921 [2024-11-07 13:43:41.790061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:33.921 [2024-11-07 13:43:41.803499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:38:33.921 [2024-11-07 13:43:41.805816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.921 [2024-11-07 13:43:41.805837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:33.921 [2024-11-07 13:43:41.814184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016be9e10 00:38:33.921 [2024-11-07 13:43:41.815785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.921 [2024-11-07 13:43:41.815806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:33.921 [2024-11-07 13:43:41.828287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5220 00:38:33.921 [2024-11-07 13:43:41.829920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.921 [2024-11-07 13:43:41.829942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:33.921 [2024-11-07 13:43:41.841591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0 00:38:33.921 [2024-11-07 13:43:41.843205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.921 [2024-11-07 13:43:41.843227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:33.921 [2024-11-07 13:43:41.856583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf35f0 00:38:33.921 [2024-11-07 13:43:41.858882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.921 [2024-11-07 13:43:41.858903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:33.921 [2024-11-07 13:43:41.868027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2d80 00:38:33.921 [2024-11-07 13:43:41.869630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.921 [2024-11-07 13:43:41.869652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:33.921 [2024-11-07 13:43:41.881280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0 00:38:33.921 [2024-11-07 13:43:41.882853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.922 [2024-11-07 13:43:41.882879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:33.922 [2024-11-07 13:43:41.894593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:38:33.922 [2024-11-07 13:43:41.896216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.922 [2024-11-07 13:43:41.896241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:33.922 [2024-11-07 13:43:41.906917] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:38:33.922 [2024-11-07 13:43:41.908454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.922 [2024-11-07 13:43:41.908474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:33.922 [2024-11-07 13:43:41.921059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0 00:38:33.922 [2024-11-07 13:43:41.922660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:33.922 [2024-11-07 13:43:41.922682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:34.183 [2024-11-07 13:43:41.934269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:38:34.183 [2024-11-07 13:43:41.935878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.183 [2024-11-07 13:43:41.935900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.183 [2024-11-07 13:43:41.946679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3e60 00:38:34.183 [2024-11-07 13:43:41.948240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.183 [2024-11-07 13:43:41.948261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:34.183 [2024-11-07 13:43:41.960825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1ca0 00:38:34.183 [2024-11-07 13:43:41.962402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.183 [2024-11-07 13:43:41.962424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:34.183 [2024-11-07 13:43:41.973182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee190 00:38:34.183 [2024-11-07 13:43:41.974724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.183 [2024-11-07 13:43:41.974746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:34.183 [2024-11-07 13:43:41.987335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:38:34.183 [2024-11-07 13:43:41.988882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.183 [2024-11-07 13:43:41.988903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:34.184 
[2024-11-07 13:43:42.002262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1f80 00:38:34.184 [2024-11-07 13:43:42.004515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.004536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.013799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:38:34.184 [2024-11-07 13:43:42.015365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.015386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.027051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:38:34.184 [2024-11-07 13:43:42.028591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.028612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.040352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2510 00:38:34.184 [2024-11-07 13:43:42.041845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.041870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.055255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016befae0 00:38:34.184 [2024-11-07 13:43:42.057489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.057510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.066798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf35f0 00:38:34.184 [2024-11-07 13:43:42.068355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.068382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.081746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:38:34.184 [2024-11-07 13:43:42.083983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.084004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.092396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:38:34.184 [2024-11-07 13:43:42.093923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.093944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.106713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:38:34.184 [2024-11-07 13:43:42.108251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.108273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.119043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:38:34.184 [2024-11-07 13:43:42.120555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.120576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.133174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:38:34.184 [2024-11-07 13:43:42.134731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.134753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.146462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee190 00:38:34.184 [2024-11-07 13:43:42.147941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.147962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.159708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:38:34.184 [2024-11-07 13:43:42.161213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.161234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:34.184 [2024-11-07 13:43:42.174651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:38:34.184 [2024-11-07 13:43:42.176856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.184 [2024-11-07 13:43:42.176881] nvme_qpair.c: 
[... digest-error sequence continues through 13:43:42.252 ...]
00:38:34.445 19039.00 IOPS, 74.37 MiB/s
[... digest-error sequence continues, all on tqpair=(0x618000004480) and all completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), through 13:43:43.246 ...]
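The "(00/22)" in every completion print is the SCT/SC status pair: status code type 0x0 (generic command status) with status code 0x22, which the NVMe base specification names Transient Transport Error; dnr:0 means the do-not-retry bit is clear, so the host may resubmit the command. A minimal, self-contained sketch of how those printed fields unpack from the status halfword of a completion queue entry (the struct and function names here are illustrative, not SPDK's):

#include <stdint.h>
#include <stdio.h>

/* Fields of the status halfword in CQE dword 3 (bits 31:16), per the NVMe
 * base spec: bit 0 = phase tag, bits 8:1 = status code, bits 11:9 = status
 * code type, bits 13:12 = command retry delay, bit 14 = more, bit 15 = DNR. */
struct nvme_status {
    uint8_t p, sc, sct, crd, m, dnr;
};

static struct nvme_status decode_status(uint16_t sts)
{
    struct nvme_status s = {
        .p   = sts & 0x1,
        .sc  = (sts >> 1) & 0xFF,
        .sct = (sts >> 9) & 0x7,
        .crd = (sts >> 12) & 0x3,
        .m   = (sts >> 14) & 0x1,
        .dnr = (sts >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* Hypothetical halfword with SCT=0x0 (generic), SC=0x22 (Transient
     * Transport Error), p=0, m=0, dnr=0: the (00/22) status above. */
    uint16_t sts = (0x22 << 1) | (0x0 << 9);
    struct nvme_status s = decode_status(sts);

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}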
pdu=0x200016be23b8 00:38:35.232 [2024-11-07 13:43:43.181221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.232 [2024-11-07 13:43:43.181242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:35.232 [2024-11-07 13:43:43.192345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5be8 00:38:35.232 [2024-11-07 13:43:43.193555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.232 [2024-11-07 13:43:43.193576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:35.232 [2024-11-07 13:43:43.206434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:38:35.232 [2024-11-07 13:43:43.207714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.232 [2024-11-07 13:43:43.207736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:35.232 [2024-11-07 13:43:43.219636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5be8 00:38:35.232 [2024-11-07 13:43:43.220901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.232 [2024-11-07 13:43:43.220922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:35.232 [2024-11-07 13:43:43.234512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6020 00:38:35.493 [2024-11-07 13:43:43.236451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.493 [2024-11-07 13:43:43.236473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:38:35.493 [2024-11-07 13:43:43.245160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be38d0 00:38:35.493 [2024-11-07 13:43:43.246386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.493 [2024-11-07 13:43:43.246407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:35.493 19157.50 IOPS, 74.83 MiB/s 00:38:35.493 Latency(us) 00:38:35.493 [2024-11-07T12:43:43.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.493 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:35.493 nvme0n1 : 2.00 19179.93 74.92 0.00 0.00 6668.22 2484.91 16930.13 00:38:35.493 [2024-11-07T12:43:43.500Z] =================================================================================================================== 00:38:35.493 [2024-11-07T12:43:43.500Z] Total : 19179.93 74.92 0.00 0.00 6668.22 2484.91 16930.13 
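A quick cross-check of the summary row above: the columns are mutually consistent. At this run's 4096-byte I/O size, 19179.93 IOPS works out to 19179.93 x 4096 / 1048576 = 74.92 MiB/s, matching the MiB/s column, and Little's law (in-flight I/Os = rate x latency) gives roughly 128 / 0.0066682 s = ~19196 IOPS from the queue depth and average latency, within a fraction of a percent of the measured rate. The io_failed count in the JSON below staying 0 is consistent with the harness's --bdev-retry-count -1 setting (traced again in the setup further down): digest failures surface as transient transport errors and are retried rather than failed outright.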
00:38:35.493 {
00:38:35.493 "results": [
00:38:35.493 {
00:38:35.493 "job": "nvme0n1",
00:38:35.493 "core_mask": "0x2",
00:38:35.493 "workload": "randwrite",
00:38:35.493 "status": "finished",
00:38:35.493 "queue_depth": 128,
00:38:35.493 "io_size": 4096,
00:38:35.493 "runtime": 2.004335,
00:38:35.493 "iops": 19179.9275071283,
00:38:35.493 "mibps": 74.92159182471993,
00:38:35.493 "io_failed": 0,
00:38:35.493 "io_timeout": 0,
00:38:35.493 "avg_latency_us": 6668.22471087064,
00:38:35.493 "min_latency_us": 2484.9066666666668,
00:38:35.493 "max_latency_us": 16930.133333333335
00:38:35.493 }
00:38:35.493 ],
00:38:35.493 "core_count": 1
00:38:35.493 }
00:38:35.493 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:35.493 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:35.493 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:35.493 | .driver_specific
00:38:35.493 | .nvme_error
00:38:35.493 | .status_code
00:38:35.493 | .command_transient_transport_error'
00:38:35.493 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:35.493 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 ))
00:38:35.493 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4132660
00:38:35.493 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4132660 ']'
00:38:35.493 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4132660
00:38:35.493 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:38:35.493 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:38:35.493 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4132660
00:38:35.755 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4132660'
killing process with pid 4132660
13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4132660
Received shutdown signal, test time was about 2.000000 seconds
00:38:35.755
00:38:35.755 Latency(us)
[2024-11-07T12:43:43.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-07T12:43:43.762Z] ===================================================================================================================
[2024-11-07T12:43:43.762Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:35.755 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4132660
00:38:36.015 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
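Before the 131072-byte run below gets going, a note on the pass criterion just traced: get_transient_errcount distills the whole run into one number, the count of COMMAND TRANSIENT TRANSPORT ERROR completions recorded against the bdev, and the test passes only if it is positive, here (( 150 > 0 )). A standalone sketch of the same query, assuming this workspace layout and a bdevperf instance still answering on /var/tmp/bperf.sock:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Same RPC + jq pipeline as the traced helper, with the filter collapsed to one path.
    errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))  # a non-zero exit status here would fail the test

The per-status-code counters are populated because bdev_nvme_set_options is called with --nvme-error-stat before the controller is attached, as the next run's setup shows.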
00:38:36.015 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:36.015 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:38:36.016 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:38:36.016 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:38:36.016 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4133509
00:38:36.016 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4133509 /var/tmp/bperf.sock
00:38:36.016 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4133509 ']'
00:38:36.016 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:38:36.016 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:38:36.016 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:38:36.016 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:38:36.016 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:36.016 13:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:38:36.276 [2024-11-07 13:43:44.025138] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:38:36.276 [2024-11-07 13:43:44.025247] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4133509 ]
00:38:36.276 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:36.276 Zero copy mechanism will not be used.
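To unpack the bdevperf invocation just traced: -m 2 is a core mask pinning the app to core 1 (hence the "Reactor started on core 1" notice below), -r names the RPC socket that waitforlisten polls, -w randwrite / -o 131072 / -q 16 / -t 2 select the workload, I/O size, queue depth, and duration requested by run_bperf_err, and -z starts the process idle so no I/O moves until a perform_tests RPC arrives. A rough standalone equivalent, with a simple readiness loop standing in for the harness's waitforlisten helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll the RPC socket until bdevperf answers (waitforlisten does this more carefully).
    until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done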
00:38:36.276 [2024-11-07 13:43:44.167049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:36.276 [2024-11-07 13:43:44.241033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:36.848 13:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:38:36.848 13:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:38:36.848 13:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:36.848 13:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:37.109 13:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:37.110 13:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:37.110 13:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:37.110 13:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:37.110 13:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:37.110 13:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:37.682 nvme0n1
00:38:37.682 13:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:38:37.682 13:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:37.682 13:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:37.682 13:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:37.682 13:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:37.682 13:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:37.682 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:37.682 Zero copy mechanism will not be used.
00:38:37.682 Running I/O for 2 seconds...
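The setup traced above is the core of the digest-error scenario, condensed here into one sketch. The controller is attached with --ddgst, so every TCP data PDU carries a CRC-32C data digest; bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 turns on per-status-code error counters and unbounded retries; and accel_error_inject_error (sent via rpc_cmd, which in this harness appears to address the nvmf target rather than bdevperf) first clears any stale injection, then arms corruption of the next 32 crc32c operations, so digest verification fails and write completions come back as COMMAND TRANSIENT TRANSPORT ERROR, producing the flood that follows. A sketch under those assumptions, with the target taken to be on rpc.py's default socket:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$RPC" accel_error_inject_error -o crc32c -t disable     # target side: clear leftover injections
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32   # corrupt the next 32 crc32c ops
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests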
00:38:37.682 [2024-11-07 13:43:45.497906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.682 [2024-11-07 13:43:45.498156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.682 [2024-11-07 13:43:45.498188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.682 [2024-11-07 13:43:45.507887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.682 [2024-11-07 13:43:45.508191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.682 [2024-11-07 13:43:45.508217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.682 [2024-11-07 13:43:45.516940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.682 [2024-11-07 13:43:45.517168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.682 [2024-11-07 13:43:45.517190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.682 [2024-11-07 13:43:45.523848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.682 [2024-11-07 13:43:45.524087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.682 [2024-11-07 13:43:45.524109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.682 [2024-11-07 13:43:45.531968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.682 [2024-11-07 13:43:45.532066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.682 [2024-11-07 13:43:45.532088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.682 [2024-11-07 13:43:45.540203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.682 [2024-11-07 13:43:45.540431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.682 [2024-11-07 13:43:45.540453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.682 [2024-11-07 13:43:45.548002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.682 [2024-11-07 13:43:45.548212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.682 [2024-11-07 13:43:45.548233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.682 [2024-11-07 13:43:45.557005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.682 [2024-11-07 13:43:45.557252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.682 [2024-11-07 13:43:45.557273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.682 [2024-11-07 13:43:45.565346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.682 [2024-11-07 13:43:45.565592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.682 [2024-11-07 13:43:45.565614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.682 [2024-11-07 13:43:45.574431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.682 [2024-11-07 13:43:45.574680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.682 [2024-11-07 13:43:45.574701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.682 [2024-11-07 13:43:45.583169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.682 [2024-11-07 13:43:45.583452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.682 [2024-11-07 13:43:45.583481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.682 [2024-11-07 13:43:45.591168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.683 [2024-11-07 13:43:45.591404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.683 [2024-11-07 13:43:45.591425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.683 [2024-11-07 13:43:45.597936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.683 [2024-11-07 13:43:45.598200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.683 [2024-11-07 13:43:45.598220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.683 [2024-11-07 13:43:45.605664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.683 [2024-11-07 13:43:45.605933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.683 [2024-11-07 13:43:45.605953] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.683 [2024-11-07 13:43:45.613342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.683 [2024-11-07 13:43:45.613621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.683 [2024-11-07 13:43:45.613643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.683 [2024-11-07 13:43:45.622158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.683 [2024-11-07 13:43:45.622438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.683 [2024-11-07 13:43:45.622458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.683 [2024-11-07 13:43:45.630845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.683 [2024-11-07 13:43:45.631103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.683 [2024-11-07 13:43:45.631123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.683 [2024-11-07 13:43:45.639015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.683 [2024-11-07 13:43:45.639242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.683 [2024-11-07 13:43:45.639263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.683 [2024-11-07 13:43:45.649063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.683 [2024-11-07 13:43:45.649369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.683 [2024-11-07 13:43:45.649391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.683 [2024-11-07 13:43:45.660847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.683 [2024-11-07 13:43:45.661114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.683 [2024-11-07 13:43:45.661136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.683 [2024-11-07 13:43:45.672052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.683 [2024-11-07 13:43:45.672357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:37.683 [2024-11-07 13:43:45.672379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.683 [2024-11-07 13:43:45.683423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.683 [2024-11-07 13:43:45.683613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.683 [2024-11-07 13:43:45.683635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.694685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.694961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.694982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.705046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.705113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.705133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.712010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.712086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.712107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.720825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.721102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.721130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.729649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.729851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.729876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.738442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.738721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.738742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.747620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.747848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.747873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.757806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.758102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.758122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.767308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.767502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.767522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.775002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.775286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.775306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.781687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.781929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.781949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.787874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.787940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.787960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.796263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 
13:43:45.796345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.945 [2024-11-07 13:43:45.796365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.945 [2024-11-07 13:43:45.801882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.945 [2024-11-07 13:43:45.802139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.802159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.809973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.810218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.810238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.816718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.816986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.817006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.824781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.825113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.825134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.832628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.832881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.832902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.841549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.841628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.841648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.847048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.847330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.847351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.854465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.854534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.854558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.861258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.861381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.861402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.870384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.870464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.870485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.878990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.879217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.879237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.886286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.886566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.886588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.894304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.894365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.894385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.902572] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.902777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.902798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.910272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.910475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.910495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.916652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.916885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.916905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.922959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.923216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.923236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.929753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.929826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.929847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.936673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.936899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.936919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:37.946 [2024-11-07 13:43:45.945928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:37.946 [2024-11-07 13:43:45.946199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:37.946 [2024-11-07 13:43:45.946219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:38:38.208 [2024-11-07 13:43:45.956936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.208 [2024-11-07 13:43:45.957194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.208 [2024-11-07 13:43:45.957214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.208 [2024-11-07 13:43:45.967818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.208 [2024-11-07 13:43:45.968134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.208 [2024-11-07 13:43:45.968156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.208 [2024-11-07 13:43:45.979658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.208 [2024-11-07 13:43:45.979989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.208 [2024-11-07 13:43:45.980010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.208 [2024-11-07 13:43:45.991008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.208 [2024-11-07 13:43:45.991290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.208 [2024-11-07 13:43:45.991311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.208 [2024-11-07 13:43:46.002673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.208 [2024-11-07 13:43:46.002960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.208 [2024-11-07 13:43:46.002990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.208 [2024-11-07 13:43:46.014198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.208 [2024-11-07 13:43:46.014530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.014559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.025424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.025692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.025713] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.036195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.036515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.036537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.047675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.047948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.047968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.056114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.056381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.056402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.064284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.064351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.064371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.072488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.072690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.072710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.081424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.081518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.081538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.087564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.087781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 
13:43:46.087802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.095901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.096159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.096181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.103613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.103829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.103849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.109956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.110227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.110247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.118231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.118318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.118339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.128324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.128420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.128441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.135557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.135796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.135817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.144288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.144552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.144572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.151943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.152024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.152045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.160871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.161142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.161163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.169694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.169954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.169974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.178561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.178783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.178804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.185771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.185839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.185859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.193309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.193569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.193590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.209 [2024-11-07 13:43:46.201776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.209 [2024-11-07 13:43:46.202053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.209 [2024-11-07 13:43:46.202075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.471 [2024-11-07 13:43:46.212850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.471 [2024-11-07 13:43:46.213128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.471 [2024-11-07 13:43:46.213149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.471 [2024-11-07 13:43:46.223124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.471 [2024-11-07 13:43:46.223382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.471 [2024-11-07 13:43:46.223403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.471 [2024-11-07 13:43:46.234412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.471 [2024-11-07 13:43:46.234694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.471 [2024-11-07 13:43:46.234716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.471 [2024-11-07 13:43:46.244931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.471 [2024-11-07 13:43:46.245254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.471 [2024-11-07 13:43:46.245276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.471 [2024-11-07 13:43:46.255755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.471 [2024-11-07 13:43:46.256005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.471 [2024-11-07 13:43:46.256025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.471 [2024-11-07 13:43:46.266212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.471 [2024-11-07 13:43:46.266490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.471 [2024-11-07 13:43:46.266511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.471 [2024-11-07 13:43:46.276665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:38:38.471 [2024-11-07 13:43:46.276976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.471 [2024-11-07 13:43:46.276997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.471 [2024-11-07 13:43:46.287195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.471 [2024-11-07 13:43:46.287485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.471 [2024-11-07 13:43:46.287505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.297250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.297548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.297569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.307710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.307969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.307990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.318550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.318866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.318887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.329436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.329750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.329772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.339992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.340253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.340274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.351181] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.351457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.351477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.361386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.361629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.361650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.371602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.371804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.371825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.380554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.380658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.380679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.388537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.388837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.388859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.399398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.399667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.399689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.408460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.408547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.408571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
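The repeated tcp.c:2233:data_crc32_calc_done errors above come from the receive path recomputing the CRC32C data digest (DDGST) over each PDU payload and comparing it with the digest carried in the PDU; on a mismatch the command is completed back to the host as a transport error, which is exactly what this error-injection pass is exercising. Below is a minimal sketch of that check, assuming the standard CRC32C (Castagnoli) digest that NVMe/TCP uses; the 4 KiB payload and the single bit flip are hypothetical stand-ins for the corruption the test injects, not SPDK code:

# Self-contained bitwise CRC32C (reflected polynomial 0x82F63B78),
# init 0xFFFFFFFF, final XOR 0xFFFFFFFF -- the digest NVMe/TCP uses.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value

payload = bytes(4096)              # hypothetical PDU data (assumption)
ddgst = crc32c(payload)            # digest the sender would append

# A single flipped bit is always caught by a CRC, so the receiver's
# recomputed digest cannot match and the PDU is rejected.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert crc32c(corrupted) != ddgst  # -> "Data digest error" path
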
00:38:38.472 [2024-11-07 13:43:46.417183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.417457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.417477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.427066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.427265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.427285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.436996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.437292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.437314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.446653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.446883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.446904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.455375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.455646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.455665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.463281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.463513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.463533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.472 [2024-11-07 13:43:46.471316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.472 [2024-11-07 13:43:46.471497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.472 [2024-11-07 13:43:46.471517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.734 [2024-11-07 13:43:46.480890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.734 [2024-11-07 13:43:46.481087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.734 [2024-11-07 13:43:46.481107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.734 [2024-11-07 13:43:46.488208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.734 [2024-11-07 13:43:46.488422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.734 [2024-11-07 13:43:46.488443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.734 [2024-11-07 13:43:46.495669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.734 [2024-11-07 13:43:46.495911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.734 [2024-11-07 13:43:46.495932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.734 3472.00 IOPS, 434.00 MiB/s [2024-11-07T12:43:46.741Z] [2024-11-07 13:43:46.504467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.734 [2024-11-07 13:43:46.504688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.734 [2024-11-07 13:43:46.504710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.734 [2024-11-07 13:43:46.512223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.734 [2024-11-07 13:43:46.512339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.734 [2024-11-07 13:43:46.512361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.734 [2024-11-07 13:43:46.519072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.734 [2024-11-07 13:43:46.519305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.734 [2024-11-07 13:43:46.519326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.734 [2024-11-07 13:43:46.526135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.734 [2024-11-07 13:43:46.526272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
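Each of these digest failures completes with status (00/22), which spdk_nvme_print_completion prints as (SCT/SC) in hex: status code type 0x0 (generic command status) with status code 0x22 (command transient transport error), and dnr:0, so the host is free to retry the WRITE. A small sketch of pulling those fields out of one of the completion lines above; the regular expression and variable names are illustrative, not SPDK code:

import re

LINE = ("COMMAND TRANSIENT TRANSPORT ERROR (00/22) "
        "qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0")

# SPDK prints status as (SCT/SC) in hex; dnr is the Do Not Retry bit.
m = re.search(r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\).*dnr:(?P<dnr>\d)", LINE)
sct, sc, dnr = int(m["sct"], 16), int(m["sc"], 16), int(m["dnr"])
assert (sct, sc) == (0x00, 0x22)  # generic status / transient transport error
assert dnr == 0                   # Do Not Retry clear: the host may retry

As a rough cross-check on the interleaved bdevperf progress line, 434.00 MiB/s divided by 3472.00 IOPS works out to 128 KiB per I/O, which is consistent with the len:32 writes if the namespace uses a 4 KiB block; the block size is not shown in this excerpt, so that last step is an assumption.
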
00:38:38.735 [2024-11-07 13:43:46.526293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.532417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.532627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.532648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.539501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.539652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.539673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.544384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.544602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.544626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.551221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.551546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.551568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.557846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.558152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.558173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.563185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.563514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.563535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.571687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.571970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.571991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.578496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.578752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.578783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.585550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.585753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.585774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.592773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.592896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.592917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.596923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.597050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.597070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.600951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.601105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.601125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.604906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.605233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.605254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.610666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.610757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.610777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.615554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.615762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.615784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.620389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.620526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.620547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.625045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.625182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.625203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.632537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.632676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.632697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.641152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.641423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.641444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.649523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.649893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.649915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.657527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.657664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.657685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.662215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.662360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.662381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.667502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.667665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.667686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.674422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.674574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.674595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.680276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.680445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.680466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.689235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.689483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.689504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.735 [2024-11-07 13:43:46.698010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.735 [2024-11-07 13:43:46.698264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.735 [2024-11-07 13:43:46.698286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.736 [2024-11-07 13:43:46.708551] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.736 [2024-11-07 13:43:46.708754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.736 [2024-11-07 13:43:46.708774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.736 [2024-11-07 13:43:46.718881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.736 [2024-11-07 13:43:46.719046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.736 [2024-11-07 13:43:46.719068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.736 [2024-11-07 13:43:46.729529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.736 [2024-11-07 13:43:46.729781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.736 [2024-11-07 13:43:46.729802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.739820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.740063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.740084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.750446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.750680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.750701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.760687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.761049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.761071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.770662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.770882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.770903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.781211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.781486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.781507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.790922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.791118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.791138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.801421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.801653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.801673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.811663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.811887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.811908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.821346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.821644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.821665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.831283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.831352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.831373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.840928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.841213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.841234] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.851347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.851538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.851558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.861032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.861315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.861337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.870437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.870845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.870873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.880060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.880326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.880346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.890070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.890313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.890337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.897724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.897785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.897804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.903543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.903712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 
13:43:46.903733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.912378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.912498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.912519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.920196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.920431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.920451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.928300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.928399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.928419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.935016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.935277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.935297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.943219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.943292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.943313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.998 [2024-11-07 13:43:46.950917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.998 [2024-11-07 13:43:46.951283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.998 [2024-11-07 13:43:46.951304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.999 [2024-11-07 13:43:46.960374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.999 [2024-11-07 13:43:46.960455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.999 [2024-11-07 13:43:46.960475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:38.999 [2024-11-07 13:43:46.969631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.999 [2024-11-07 13:43:46.969855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.999 [2024-11-07 13:43:46.969882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:38.999 [2024-11-07 13:43:46.976581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.999 [2024-11-07 13:43:46.976780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.999 [2024-11-07 13:43:46.976801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:38.999 [2024-11-07 13:43:46.984786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.999 [2024-11-07 13:43:46.984892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.999 [2024-11-07 13:43:46.984913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:38.999 [2024-11-07 13:43:46.994156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:38.999 [2024-11-07 13:43:46.994396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.999 [2024-11-07 13:43:46.994416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.003114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.003245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.003265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.009048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.009194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.009214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.015731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.015839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.015860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.024443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.024639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.024663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.032378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.032556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.032576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.040849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.040929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.040950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.049536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.049657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.049678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.058066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.058176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.058196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.063539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.063656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.063676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.069263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.069324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.069344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.073703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.073764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.073784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.078476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.078567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.078587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.083097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.083169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.083197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.088816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.088892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.088912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.095659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.095733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.095754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.101829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.102134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.102155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.109721] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.109947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.109967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.118771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.118856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.118882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:39.262 [2024-11-07 13:43:47.125097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.262 [2024-11-07 13:43:47.125185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-07 13:43:47.125205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:39.263 [2024-11-07 13:43:47.130495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.263 [2024-11-07 13:43:47.130567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-07 13:43:47.130587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:39.263 [2024-11-07 13:43:47.134534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.263 [2024-11-07 13:43:47.134599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-07 13:43:47.134620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:39.263 [2024-11-07 13:43:47.138546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.263 [2024-11-07 13:43:47.138614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-07 13:43:47.138634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:39.263 [2024-11-07 13:43:47.142544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.263 [2024-11-07 13:43:47.142606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-07 13:43:47.142627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:38:39.263 [2024-11-07 13:43:47.149270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:38:39.263 [2024-11-07 13:43:47.149370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-07 13:43:47.149390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[several dozen near-identical entries from 13:43:47.156 through 13:43:47.497 omitted: each repeats the tcp.c:2233:data_crc32_calc_done "Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8", a WRITE command print (len:32, varying lba), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; these are the expected digest failures driven by this error-injection test]
00:38:39.527 3893.50 IOPS, 486.69 MiB/s 00:38:39.527 Latency(us) 00:38:39.527 [2024-11-07T12:43:47.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:39.527 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:39.527 nvme0n1 : 2.00 3894.62 486.83 0.00 0.00 4102.55 1788.59 11632.64 00:38:39.527 [2024-11-07T12:43:47.534Z] =================================================================================================================== 00:38:39.527 [2024-11-07T12:43:47.534Z] Total : 3894.62 486.83 0.00 0.00 4102.55 1788.59 11632.64 00:38:39.527 { 00:38:39.527 "results": [ 00:38:39.527 { 00:38:39.527 "job": "nvme0n1", 00:38:39.527 "core_mask": "0x2", 00:38:39.527 "workload": "randwrite", 00:38:39.527 "status": "finished", 00:38:39.527 "queue_depth": 16, 00:38:39.527 "io_size": 131072, 00:38:39.527 "runtime": 2.003533, 00:38:39.527 "iops": 3894.6201534988445, 00:38:39.527 
"mibps": 486.82751918735556, 00:38:39.527 "io_failed": 0, 00:38:39.527 "io_timeout": 0, 00:38:39.527 "avg_latency_us": 4102.553704985262, 00:38:39.527 "min_latency_us": 1788.5866666666666, 00:38:39.527 "max_latency_us": 11632.64 00:38:39.527 } 00:38:39.527 ], 00:38:39.527 "core_count": 1 00:38:39.527 } 00:38:39.527 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:39.788 | .driver_specific 00:38:39.788 | .nvme_error 00:38:39.788 | .status_code 00:38:39.788 | .command_transient_transport_error' 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 252 > 0 )) 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4133509 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4133509 ']' 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4133509 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4133509 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4133509' 00:38:39.788 killing process with pid 4133509 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4133509 00:38:39.788 Received shutdown signal, test time was about 2.000000 seconds 00:38:39.788 00:38:39.788 Latency(us) 00:38:39.788 [2024-11-07T12:43:47.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:39.788 [2024-11-07T12:43:47.795Z] =================================================================================================================== 00:38:39.788 [2024-11-07T12:43:47.795Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:39.788 13:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4133509 00:38:40.359 13:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4130835 00:38:40.359 13:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4130835 ']' 00:38:40.359 13:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4130835 00:38:40.359 13:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 
00:38:40.359 13:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:40.359 13:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4130835 00:38:40.359 13:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:40.359 13:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:40.359 13:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4130835' 00:38:40.359 killing process with pid 4130835 00:38:40.359 13:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4130835 00:38:40.359 13:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4130835 00:38:41.300 00:38:41.300 real 0m18.523s 00:38:41.300 user 0m35.434s 00:38:41.300 sys 0m3.870s 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:41.300 ************************************ 00:38:41.300 END TEST nvmf_digest_error 00:38:41.300 ************************************ 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:41.300 rmmod nvme_tcp 00:38:41.300 rmmod nvme_fabrics 00:38:41.300 rmmod nvme_keyring 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 4130835 ']' 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 4130835 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 4130835 ']' 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 4130835 00:38:41.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (4130835) - No such process 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 4130835 is not found' 00:38:41.300 Process with pid 4130835 is not found 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:41.300 13:43:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:38:41.300 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:41.301 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:41.301 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.301 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.301 13:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.211 13:43:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:43.211 00:38:43.211 real 0m48.723s 00:38:43.211 user 1m15.298s 00:38:43.211 sys 0m13.969s 00:38:43.473 13:43:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:43.473 13:43:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:43.473 ************************************ 00:38:43.473 END TEST nvmf_digest 00:38:43.473 ************************************ 00:38:43.473 13:43:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:38:43.473 13:43:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:38:43.473 13:43:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:38:43.473 13:43:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:43.473 13:43:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:43.473 13:43:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:43.473 13:43:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.473 ************************************ 00:38:43.473 START TEST nvmf_bdevperf 00:38:43.473 ************************************ 00:38:43.473 13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:43.473 * Looking for test storage... 
00:38:43.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]]
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[scripts/common.sh@333-@368 cmp_versions step trace omitted: splits the two versions on dots, compares them field by field, returns 0 (1.15 < 2)]
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
[four near-identical LCOV_OPTS/LCOV export traces omitted (common/autotest_common.sh@1704-@1705): each carries the same block of --rc lcov_*, genhtml_*, and geninfo_* options]
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6 PATH manipulation omitted: four near-identical PATH assignments and a final echo, each prefixing the same /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin toolchain trio, repeated seven times, ahead of the standard system directories, followed by export PATH]
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']'
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:43.734 13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:43.734 13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:43.734 13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:43.734 13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:43.734 13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.734 13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.734 13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.734 13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:43.734 13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:43.734 13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:38:43.734 13:43:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:51.870 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:51.870 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
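
(The discovery loop above resolves each supported NIC's PCI function to its kernel net device through sysfs. A rough sketch of that lookup, using the two e810 addresses found in this run; the loop body mirrors the pci_net_devs expansions traced from nvmf/common.sh:)

    # For each matching PCI function, the attached net devices are simply
    # the directory entries under /sys/bus/pci/devices/<pci>/net/.
    for pci in 0000:31:00.0 0000:31:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep ifnames (e.g. cvl_0_0)
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
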
00:38:51.870 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:51.871 Found net devices under 0000:31:00.0: cvl_0_0 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:51.871 Found net devices under 0000:31:00.1: cvl_0_1 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:51.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:51.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:38:51.871 00:38:51.871 --- 10.0.0.2 ping statistics --- 00:38:51.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:51.871 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:51.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:51.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:38:51.871 00:38:51.871 --- 10.0.0.1 ping statistics --- 00:38:51.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:51.871 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4139007 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4139007 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 4139007 ']' 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:51.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:51.871 13:43:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.871 [2024-11-07 13:43:59.639693] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:38:51.871 [2024-11-07 13:43:59.639797] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:51.871 [2024-11-07 13:43:59.803461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:52.131 [2024-11-07 13:43:59.906067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:52.131 [2024-11-07 13:43:59.906116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:52.131 [2024-11-07 13:43:59.906128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:52.131 [2024-11-07 13:43:59.906141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:52.131 [2024-11-07 13:43:59.906150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:52.131 [2024-11-07 13:43:59.908195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:52.131 [2024-11-07 13:43:59.908316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:52.131 [2024-11-07 13:43:59.908339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.702 [2024-11-07 13:44:00.441345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.702 Malloc0 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
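Taken together, nvmf_tcp_init and the rpc_cmd calls traced around this point reduce to a short recipe: move the target-side port into its own network namespace, address both ends, open TCP/4420 through the firewall, start nvmf_tgt inside the namespace, then provision it over JSON-RPC (rpc_cmd is the test framework's wrapper around scripts/rpc.py on /var/tmp/spdk.sock). A condensed sketch under the same interface names and addresses as the trace; the add-ns and listener calls appear verbatim just below:

# Run as root from an SPDK checkout; cvl_0_0/cvl_0_1 as in the log.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xE &
sleep 2   # stand-in for the framework's waitforlisten
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420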
00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.702 [2024-11-07 13:44:00.544741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:52.702 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:52.702 { 00:38:52.702 "params": { 00:38:52.702 "name": "Nvme$subsystem", 00:38:52.702 "trtype": "$TEST_TRANSPORT", 00:38:52.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:52.702 "adrfam": "ipv4", 00:38:52.702 "trsvcid": "$NVMF_PORT", 00:38:52.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:52.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:52.702 "hdgst": ${hdgst:-false}, 00:38:52.702 "ddgst": ${ddgst:-false} 00:38:52.702 }, 00:38:52.702 "method": "bdev_nvme_attach_controller" 00:38:52.702 } 00:38:52.702 EOF 00:38:52.702 )") 00:38:52.703 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:38:52.703 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:38:52.703 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:38:52.703 13:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:52.703 "params": { 00:38:52.703 "name": "Nvme1", 00:38:52.703 "trtype": "tcp", 00:38:52.703 "traddr": "10.0.0.2", 00:38:52.703 "adrfam": "ipv4", 00:38:52.703 "trsvcid": "4420", 00:38:52.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:52.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:52.703 "hdgst": false, 00:38:52.703 "ddgst": false 00:38:52.703 }, 00:38:52.703 "method": "bdev_nvme_attach_controller" 00:38:52.703 }' 00:38:52.703 [2024-11-07 13:44:00.639172] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:38:52.703 [2024-11-07 13:44:00.639276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4139204 ] 00:38:52.963 [2024-11-07 13:44:00.777927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.963 [2024-11-07 13:44:00.875954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.533 Running I/O for 1 seconds... 00:38:54.474 7937.00 IOPS, 31.00 MiB/s 00:38:54.474 Latency(us) 00:38:54.474 [2024-11-07T12:44:02.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:54.474 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:54.474 Verification LBA range: start 0x0 length 0x4000 00:38:54.474 Nvme1n1 : 1.06 7678.91 30.00 0.00 0.00 15956.13 3208.53 43909.12 00:38:54.474 [2024-11-07T12:44:02.481Z] =================================================================================================================== 00:38:54.474 [2024-11-07T12:44:02.481Z] Total : 7678.91 30.00 0.00 0.00 15956.13 3208.53 43909.12 00:38:55.045 13:44:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4139595 00:38:55.045 13:44:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:38:55.045 13:44:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:38:55.045 13:44:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:38:55.045 13:44:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:38:55.045 13:44:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:38:55.045 13:44:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:55.045 13:44:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:55.045 { 00:38:55.045 "params": { 00:38:55.045 "name": "Nvme$subsystem", 00:38:55.045 "trtype": "$TEST_TRANSPORT", 00:38:55.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:55.045 "adrfam": "ipv4", 00:38:55.045 "trsvcid": "$NVMF_PORT", 00:38:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:55.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:55.045 "hdgst": ${hdgst:-false}, 00:38:55.045 "ddgst": ${ddgst:-false} 00:38:55.045 }, 00:38:55.045 "method": "bdev_nvme_attach_controller" 00:38:55.045 } 00:38:55.045 EOF 00:38:55.045 )") 00:38:55.045 13:44:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:38:55.045 13:44:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
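The heredoc in gen_nvmf_target_json emits only the per-controller fragment; the jq step (traced above for the first run, and re-run below for the second bdevperf invocation) folds it into SPDK's standard JSON config layout before feeding it to bdevperf via /dev/fd. As a sanity check on the first run's table: 7678.91 IOPS * 4096 B/IO = ~31.45 MB/s = 30.00 MiB/s, matching the MiB/s column. A hand-written equivalent of the piped config, hedged in that the outer "subsystems"/"bdev" wrapper is the standard SPDK config shape rather than a verbatim capture of the jq output:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload flags as the 15-second run traced below:
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15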
00:38:55.045 13:44:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:38:55.045 13:44:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:55.045 "params": { 00:38:55.045 "name": "Nvme1", 00:38:55.045 "trtype": "tcp", 00:38:55.045 "traddr": "10.0.0.2", 00:38:55.045 "adrfam": "ipv4", 00:38:55.045 "trsvcid": "4420", 00:38:55.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:55.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:55.045 "hdgst": false, 00:38:55.045 "ddgst": false 00:38:55.045 }, 00:38:55.045 "method": "bdev_nvme_attach_controller" 00:38:55.045 }' 00:38:55.045 [2024-11-07 13:44:03.025438] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:38:55.045 [2024-11-07 13:44:03.025550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4139595 ] 00:38:55.305 [2024-11-07 13:44:03.160272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.305 [2024-11-07 13:44:03.258496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.875 Running I/O for 15 seconds... 00:38:57.756 9971.00 IOPS, 38.95 MiB/s [2024-11-07T12:44:06.027Z] 10018.50 IOPS, 39.13 MiB/s [2024-11-07T12:44:06.027Z] 13:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4139007 00:38:58.020 13:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:58.020 [2024-11-07 13:44:05.963967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.020 [2024-11-07 13:44:05.964029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.020 [2024-11-07 13:44:05.964066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:58.020 [2024-11-07 13:44:05.964079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.020 [2024-11-07 13:44:05.964095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:58.020 [2024-11-07 13:44:05.964108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.020 [2024-11-07 13:44:05.964122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:58.020 [2024-11-07 13:44:05.964133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.020 [2024-11-07 13:44:05.964148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:58.020 [2024-11-07 13:44:05.964161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.020 [2024-11-07 13:44:05.964175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:58.020 [2024-11-07 
13:44:05.964188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.020 [... approximately 120 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs omitted: every remaining in-flight WRITE (lba 51232 through 52080, step 8) and READ (lba 51072 through 51176) on qid:1 is likewise reported and completed as ABORTED - SQ DELETION after the target process was killed ...] 00:38:58.023 [2024-11-07 13:44:05.967046] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000417b00 is same with the state(6) to be set 00:38:58.023 [2024-11-07 13:44:05.967060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:58.023 [2024-11-07 13:44:05.967070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:58.023 [2024-11-07 13:44:05.967082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51184 len:8 PRP1 0x0 PRP2 0x0 00:38:58.023 [2024-11-07 13:44:05.967093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.023 [2024-11-07 13:44:05.971032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.023 [2024-11-07 13:44:05.971112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.023 [2024-11-07 13:44:05.971806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.023 [2024-11-07 13:44:05.971831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.023 [2024-11-07 13:44:05.971844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.023 [2024-11-07 13:44:05.972089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.023 [2024-11-07 13:44:05.972326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.023 [2024-11-07 13:44:05.972340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.023 [2024-11-07 13:44:05.972352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.023 [2024-11-07 13:44:05.972365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.023 [2024-11-07 13:44:05.985448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.023 [2024-11-07 13:44:05.986094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.023 [2024-11-07 13:44:05.986140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.023 [2024-11-07 13:44:05.986156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.023 [2024-11-07 13:44:05.986424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.023 [2024-11-07 13:44:05.986662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.023 [2024-11-07 13:44:05.986681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.023 [2024-11-07 13:44:05.986692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:38:58.023 [2024-11-07 13:44:05.986705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.023 [2024-11-07 13:44:05.999581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.023 [2024-11-07 13:44:06.000279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.023 [2024-11-07 13:44:06.000325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.023 [2024-11-07 13:44:06.000341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.023 [2024-11-07 13:44:06.000608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.023 [2024-11-07 13:44:06.000846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.023 [2024-11-07 13:44:06.000859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.023 [2024-11-07 13:44:06.000880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.023 [2024-11-07 13:44:06.000892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.023 [2024-11-07 13:44:06.013752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.023 [2024-11-07 13:44:06.014379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.023 [2024-11-07 13:44:06.014404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.023 [2024-11-07 13:44:06.014416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.023 [2024-11-07 13:44:06.014651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.023 [2024-11-07 13:44:06.014891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.023 [2024-11-07 13:44:06.014904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.023 [2024-11-07 13:44:06.014914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.023 [2024-11-07 13:44:06.014924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
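[Editor's note] The `ABORTED - SQ DELETION (00/08)` completions above encode the NVMe status as an (SCT/SC) pair: status code type 0x00 (generic command status) and status code 0x08 (command aborted due to SQ deletion), which is what queued I/O receives when the submission queue is torn down during the controller reset. A minimal, illustrative decoder for that pair (not SPDK source; it only knows the codes that appear in this log):

```c
#include <stdio.h>
#include <stdint.h>

/* Unpack the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
 * Illustrative only: it covers just the codes seen in the log above. */
static const char *decode_status(uint8_t sct, uint8_t sc)
{
    if (sct == 0x00 && sc == 0x08) {
        return "ABORTED - SQ DELETION";
    }
    if (sct == 0x00 && sc == 0x00) {
        return "SUCCESS";
    }
    return "unknown (see the NVMe base spec, generic status codes)";
}

int main(void)
{
    uint8_t sct = 0x00, sc = 0x08;  /* the (00/08) seen in the completions above */
    printf("(%02x/%02x) -> %s\n", sct, sc, decode_status(sct, sc));
    return 0;
}
```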
00:38:58.286 [2024-11-07 13:44:06.027778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.286 [2024-11-07 13:44:06.028485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.286 [2024-11-07 13:44:06.028532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.286 [2024-11-07 13:44:06.028548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.286 [2024-11-07 13:44:06.028814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.286 [2024-11-07 13:44:06.029065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.286 [2024-11-07 13:44:06.029079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.286 [2024-11-07 13:44:06.029091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.286 [2024-11-07 13:44:06.029107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.286 [2024-11-07 13:44:06.041768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.286 [2024-11-07 13:44:06.042462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.286 [2024-11-07 13:44:06.042509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.286 [2024-11-07 13:44:06.042525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.286 [2024-11-07 13:44:06.042791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.286 [2024-11-07 13:44:06.043040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.286 [2024-11-07 13:44:06.043055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.286 [2024-11-07 13:44:06.043066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.286 [2024-11-07 13:44:06.043077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.286 [2024-11-07 13:44:06.055940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.286 [2024-11-07 13:44:06.056644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.286 [2024-11-07 13:44:06.056690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.286 [2024-11-07 13:44:06.056706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.286 [2024-11-07 13:44:06.056983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.286 [2024-11-07 13:44:06.057223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.286 [2024-11-07 13:44:06.057236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.286 [2024-11-07 13:44:06.057247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.286 [2024-11-07 13:44:06.057258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.286 [2024-11-07 13:44:06.070111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.286 [2024-11-07 13:44:06.070820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.286 [2024-11-07 13:44:06.070874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.286 [2024-11-07 13:44:06.070890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.286 [2024-11-07 13:44:06.071156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.286 [2024-11-07 13:44:06.071394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.286 [2024-11-07 13:44:06.071407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.286 [2024-11-07 13:44:06.071418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.286 [2024-11-07 13:44:06.071430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.286 [2024-11-07 13:44:06.084273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.286 [2024-11-07 13:44:06.084984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.286 [2024-11-07 13:44:06.085030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.286 [2024-11-07 13:44:06.085046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.286 [2024-11-07 13:44:06.085313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.286 [2024-11-07 13:44:06.085550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.286 [2024-11-07 13:44:06.085563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.286 [2024-11-07 13:44:06.085575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.286 [2024-11-07 13:44:06.085586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.286 [2024-11-07 13:44:06.098663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.286 [2024-11-07 13:44:06.099380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.286 [2024-11-07 13:44:06.099427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.286 [2024-11-07 13:44:06.099443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.286 [2024-11-07 13:44:06.099708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.286 [2024-11-07 13:44:06.099957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.286 [2024-11-07 13:44:06.099972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.286 [2024-11-07 13:44:06.099983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.286 [2024-11-07 13:44:06.099994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.286 [2024-11-07 13:44:06.112840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.286 [2024-11-07 13:44:06.113453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.286 [2024-11-07 13:44:06.113478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.286 [2024-11-07 13:44:06.113489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.286 [2024-11-07 13:44:06.113724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.286 [2024-11-07 13:44:06.113962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.286 [2024-11-07 13:44:06.113975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.286 [2024-11-07 13:44:06.113984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.286 [2024-11-07 13:44:06.113994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.286 [2024-11-07 13:44:06.126824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.286 [2024-11-07 13:44:06.127416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.286 [2024-11-07 13:44:06.127439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.286 [2024-11-07 13:44:06.127455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.286 [2024-11-07 13:44:06.127688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.286 [2024-11-07 13:44:06.127927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.286 [2024-11-07 13:44:06.127939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.286 [2024-11-07 13:44:06.127948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.286 [2024-11-07 13:44:06.127969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
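[Editor's note] Every reconnect attempt in these cycles fails at `posix.c:1054:posix_sock_create` with `errno = 111`, which on Linux is `ECONNREFUSED`: nothing is listening at 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) while the target is being exercised. A standalone sketch that reproduces the same errno; on a host where that address is local but unbound the call fails immediately, while elsewhere it may time out instead:

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Address and port taken from the log lines above. */
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* With no listener on the port, connect() fails with ECONNREFUSED
     * (111 on Linux), matching "connect() failed, errno = 111". */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```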
00:38:58.286 [2024-11-07 13:44:06.140822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.286 [2024-11-07 13:44:06.141388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.286 [2024-11-07 13:44:06.141411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.286 [2024-11-07 13:44:06.141422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.286 [2024-11-07 13:44:06.141655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.286 [2024-11-07 13:44:06.141893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.286 [2024-11-07 13:44:06.141905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.286 [2024-11-07 13:44:06.141915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.286 [2024-11-07 13:44:06.141924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.286 [2024-11-07 13:44:06.154975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.286 [2024-11-07 13:44:06.155581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.286 [2024-11-07 13:44:06.155603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.287 [2024-11-07 13:44:06.155614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.287 [2024-11-07 13:44:06.155847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.287 [2024-11-07 13:44:06.156086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.287 [2024-11-07 13:44:06.156098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.287 [2024-11-07 13:44:06.156108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.287 [2024-11-07 13:44:06.156117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.287 [2024-11-07 13:44:06.168948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.287 [2024-11-07 13:44:06.169624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.287 [2024-11-07 13:44:06.169670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.287 [2024-11-07 13:44:06.169686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.287 [2024-11-07 13:44:06.169969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.287 [2024-11-07 13:44:06.170209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.287 [2024-11-07 13:44:06.170221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.287 [2024-11-07 13:44:06.170232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.287 [2024-11-07 13:44:06.170244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.287 [2024-11-07 13:44:06.183097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.287 [2024-11-07 13:44:06.183706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.287 [2024-11-07 13:44:06.183731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.287 [2024-11-07 13:44:06.183743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.287 [2024-11-07 13:44:06.183983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.287 [2024-11-07 13:44:06.184218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.287 [2024-11-07 13:44:06.184229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.287 [2024-11-07 13:44:06.184239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.287 [2024-11-07 13:44:06.184249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.287 [2024-11-07 13:44:06.197081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.287 [2024-11-07 13:44:06.197554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.287 [2024-11-07 13:44:06.197577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.287 [2024-11-07 13:44:06.197588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.287 [2024-11-07 13:44:06.197820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.287 [2024-11-07 13:44:06.198060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.287 [2024-11-07 13:44:06.198072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.287 [2024-11-07 13:44:06.198082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.287 [2024-11-07 13:44:06.198091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.287 [2024-11-07 13:44:06.211134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.287 [2024-11-07 13:44:06.211822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.287 [2024-11-07 13:44:06.211875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.287 [2024-11-07 13:44:06.211892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.287 [2024-11-07 13:44:06.212158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.287 [2024-11-07 13:44:06.212396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.287 [2024-11-07 13:44:06.212415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.287 [2024-11-07 13:44:06.212426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.287 [2024-11-07 13:44:06.212437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.287 [2024-11-07 13:44:06.225290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.287 [2024-11-07 13:44:06.225898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.287 [2024-11-07 13:44:06.225944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.287 [2024-11-07 13:44:06.225962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.287 [2024-11-07 13:44:06.226230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.287 [2024-11-07 13:44:06.226469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.287 [2024-11-07 13:44:06.226482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.287 [2024-11-07 13:44:06.226495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.287 [2024-11-07 13:44:06.226506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.287 [2024-11-07 13:44:06.239420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.287 [2024-11-07 13:44:06.240158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.287 [2024-11-07 13:44:06.240205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.287 [2024-11-07 13:44:06.240220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.287 [2024-11-07 13:44:06.240486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.287 [2024-11-07 13:44:06.240725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.287 [2024-11-07 13:44:06.240737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.287 [2024-11-07 13:44:06.240748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.287 [2024-11-07 13:44:06.240759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.287 [2024-11-07 13:44:06.253426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.287 [2024-11-07 13:44:06.254141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.287 [2024-11-07 13:44:06.254186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.287 [2024-11-07 13:44:06.254202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.287 [2024-11-07 13:44:06.254467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.287 [2024-11-07 13:44:06.254706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.287 [2024-11-07 13:44:06.254719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.287 [2024-11-07 13:44:06.254730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.287 [2024-11-07 13:44:06.254747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.287 [2024-11-07 13:44:06.267620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.287 [2024-11-07 13:44:06.268330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.287 [2024-11-07 13:44:06.268376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.287 [2024-11-07 13:44:06.268392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.287 [2024-11-07 13:44:06.268658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.287 [2024-11-07 13:44:06.268906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.287 [2024-11-07 13:44:06.268920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.287 [2024-11-07 13:44:06.268931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.287 [2024-11-07 13:44:06.268943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
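[Editor's note] The `resetting controller` notices land roughly every 14 ms (13:44:05.971, 05.985, 05.999, ... 06.549), about forty attempts across this section. In SPDK the pacing comes from the bdev_nvme reconnect state machine polled by the reactor, not from a sleep; the loop below is only a schematic of the observable behavior, with the interval measured from the timestamps and the attempt bound chosen to match the log:

```c
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for one reconnect attempt; the real work is the
 * socket connect plus controller re-init that the log shows failing. */
static bool try_reconnect(void)
{
    return false;  /* every attempt in this section ends in ECONNREFUSED */
}

int main(void)
{
    /* ~14 ms between attempts, as measured from the log timestamps. */
    struct timespec interval = { .tv_sec = 0, .tv_nsec = 14 * 1000 * 1000 };

    for (int attempt = 1; attempt <= 42; attempt++) {  /* bound is illustrative */
        if (try_reconnect()) {
            printf("reconnected on attempt %d\n", attempt);
            return 0;
        }
        fprintf(stderr, "attempt %d: resetting controller failed\n", attempt);
        nanosleep(&interval, NULL);
    }
    fprintf(stderr, "giving up\n");
    return 1;
}
```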
00:38:58.287 [2024-11-07 13:44:06.281792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.287 [2024-11-07 13:44:06.282505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.287 [2024-11-07 13:44:06.282551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.287 [2024-11-07 13:44:06.282567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.287 [2024-11-07 13:44:06.282833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.287 [2024-11-07 13:44:06.283082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.287 [2024-11-07 13:44:06.283097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.287 [2024-11-07 13:44:06.283108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.287 [2024-11-07 13:44:06.283119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.549 [2024-11-07 13:44:06.295985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.549 [2024-11-07 13:44:06.296696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.549 [2024-11-07 13:44:06.296742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.549 [2024-11-07 13:44:06.296757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.549 [2024-11-07 13:44:06.297033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.549 [2024-11-07 13:44:06.297273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.549 [2024-11-07 13:44:06.297285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.549 [2024-11-07 13:44:06.297296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.549 [2024-11-07 13:44:06.297308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.549 [2024-11-07 13:44:06.310151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.549 [2024-11-07 13:44:06.310873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.549 [2024-11-07 13:44:06.310919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.549 [2024-11-07 13:44:06.310936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.549 [2024-11-07 13:44:06.311203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.549 [2024-11-07 13:44:06.311441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.549 [2024-11-07 13:44:06.311454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.549 [2024-11-07 13:44:06.311465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.550 [2024-11-07 13:44:06.311476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.550 [2024-11-07 13:44:06.324333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.550 [2024-11-07 13:44:06.325077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.550 [2024-11-07 13:44:06.325123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.550 [2024-11-07 13:44:06.325139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.550 [2024-11-07 13:44:06.325405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.550 [2024-11-07 13:44:06.325642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.550 [2024-11-07 13:44:06.325655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.550 [2024-11-07 13:44:06.325666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.550 [2024-11-07 13:44:06.325678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.550 [2024-11-07 13:44:06.338335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.550 [2024-11-07 13:44:06.339094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.550 [2024-11-07 13:44:06.339140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.550 [2024-11-07 13:44:06.339156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.550 [2024-11-07 13:44:06.339422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.550 [2024-11-07 13:44:06.339660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.550 [2024-11-07 13:44:06.339673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.550 [2024-11-07 13:44:06.339684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.550 [2024-11-07 13:44:06.339695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.550 [2024-11-07 13:44:06.352364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.550 [2024-11-07 13:44:06.353090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.550 [2024-11-07 13:44:06.353137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.550 [2024-11-07 13:44:06.353158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.550 [2024-11-07 13:44:06.353424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.550 [2024-11-07 13:44:06.353662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.550 [2024-11-07 13:44:06.353675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.550 [2024-11-07 13:44:06.353686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.550 [2024-11-07 13:44:06.353697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.550 [2024-11-07 13:44:06.366403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.550 [2024-11-07 13:44:06.367136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.550 [2024-11-07 13:44:06.367182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.550 [2024-11-07 13:44:06.367198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.550 [2024-11-07 13:44:06.367465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.550 [2024-11-07 13:44:06.367702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.550 [2024-11-07 13:44:06.367715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.550 [2024-11-07 13:44:06.367727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.550 [2024-11-07 13:44:06.367738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.550 [2024-11-07 13:44:06.380594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.550 [2024-11-07 13:44:06.381258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.550 [2024-11-07 13:44:06.381306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.550 [2024-11-07 13:44:06.381322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.550 [2024-11-07 13:44:06.381587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.550 [2024-11-07 13:44:06.381825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.550 [2024-11-07 13:44:06.381838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.550 [2024-11-07 13:44:06.381849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.550 [2024-11-07 13:44:06.381860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.550 [2024-11-07 13:44:06.394723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.550 [2024-11-07 13:44:06.395417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.550 [2024-11-07 13:44:06.395464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.550 [2024-11-07 13:44:06.395480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.550 [2024-11-07 13:44:06.395746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.550 [2024-11-07 13:44:06.396001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.550 [2024-11-07 13:44:06.396015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.550 [2024-11-07 13:44:06.396026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.550 [2024-11-07 13:44:06.396038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.550 [2024-11-07 13:44:06.408908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.550 [2024-11-07 13:44:06.409584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.550 [2024-11-07 13:44:06.409631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.550 [2024-11-07 13:44:06.409647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.550 [2024-11-07 13:44:06.409922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.550 [2024-11-07 13:44:06.410161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.550 [2024-11-07 13:44:06.410174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.550 [2024-11-07 13:44:06.410185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.550 [2024-11-07 13:44:06.410196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
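[Editor's note] The companion error in each cycle, `Failed to flush tqpair=... (9): Bad file descriptor`, is errno 9 (`EBADF`): by the time `nvme_tcp_qpair_process_completions` tries to flush the queue pair's send path, the underlying socket descriptor has already been closed. A minimal demonstration of the same use-after-close errno:

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return 1; }

    /* Close the write end, then try to use it anyway - the same
     * use-after-close pattern behind "(9): Bad file descriptor". */
    close(fds[1]);
    if (write(fds[1], "x", 1) < 0) {
        printf("errno = %d (%s)\n", errno, strerror(errno));  /* 9 (EBADF) */
    }
    close(fds[0]);
    return 0;
}
```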
00:38:58.550 [2024-11-07 13:44:06.423075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.550 [2024-11-07 13:44:06.423692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.550 [2024-11-07 13:44:06.423717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.550 [2024-11-07 13:44:06.423728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.550 [2024-11-07 13:44:06.423969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.550 [2024-11-07 13:44:06.424203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.550 [2024-11-07 13:44:06.424215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.550 [2024-11-07 13:44:06.424224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.550 [2024-11-07 13:44:06.424234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.550 [2024-11-07 13:44:06.437117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.550 [2024-11-07 13:44:06.437722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.550 [2024-11-07 13:44:06.437745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.550 [2024-11-07 13:44:06.437756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.550 [2024-11-07 13:44:06.437995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.550 [2024-11-07 13:44:06.438229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.550 [2024-11-07 13:44:06.438240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.550 [2024-11-07 13:44:06.438259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.550 [2024-11-07 13:44:06.438269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.550 [2024-11-07 13:44:06.451160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.550 [2024-11-07 13:44:06.451762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.550 [2024-11-07 13:44:06.451784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.550 [2024-11-07 13:44:06.451795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.551 [2024-11-07 13:44:06.452035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.551 [2024-11-07 13:44:06.452280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.551 [2024-11-07 13:44:06.452292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.551 [2024-11-07 13:44:06.452301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.551 [2024-11-07 13:44:06.452311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.551 [2024-11-07 13:44:06.465175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.551 [2024-11-07 13:44:06.465815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.551 [2024-11-07 13:44:06.465871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.551 [2024-11-07 13:44:06.465888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.551 [2024-11-07 13:44:06.466154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.551 [2024-11-07 13:44:06.466393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.551 [2024-11-07 13:44:06.466406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.551 [2024-11-07 13:44:06.466417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.551 [2024-11-07 13:44:06.466428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.551 [2024-11-07 13:44:06.479300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.551 [2024-11-07 13:44:06.479701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.551 [2024-11-07 13:44:06.479727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.551 [2024-11-07 13:44:06.479739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.551 [2024-11-07 13:44:06.479979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.551 [2024-11-07 13:44:06.480218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.551 [2024-11-07 13:44:06.480230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.551 [2024-11-07 13:44:06.480240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.551 [2024-11-07 13:44:06.480250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.551 [2024-11-07 13:44:06.493348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.551 [2024-11-07 13:44:06.494074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.551 [2024-11-07 13:44:06.494121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.551 [2024-11-07 13:44:06.494136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.551 [2024-11-07 13:44:06.494402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.551 [2024-11-07 13:44:06.494639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.551 [2024-11-07 13:44:06.494652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.551 [2024-11-07 13:44:06.494663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.551 [2024-11-07 13:44:06.494674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.551 [2024-11-07 13:44:06.507334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.551 [2024-11-07 13:44:06.508059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.551 [2024-11-07 13:44:06.508105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.551 [2024-11-07 13:44:06.508122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.551 [2024-11-07 13:44:06.508388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.551 [2024-11-07 13:44:06.508625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.551 [2024-11-07 13:44:06.508638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.551 [2024-11-07 13:44:06.508650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.551 [2024-11-07 13:44:06.508661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.551 [2024-11-07 13:44:06.521341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.551 [2024-11-07 13:44:06.521993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.551 [2024-11-07 13:44:06.522040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.551 [2024-11-07 13:44:06.522057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.551 [2024-11-07 13:44:06.522326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.551 [2024-11-07 13:44:06.522563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.551 [2024-11-07 13:44:06.522577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.551 [2024-11-07 13:44:06.522588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.551 [2024-11-07 13:44:06.522599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.551 [2024-11-07 13:44:06.535464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.551 [2024-11-07 13:44:06.536195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.551 [2024-11-07 13:44:06.536246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.551 [2024-11-07 13:44:06.536272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.551 [2024-11-07 13:44:06.536537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.551 [2024-11-07 13:44:06.536792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.551 [2024-11-07 13:44:06.536807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.551 [2024-11-07 13:44:06.536817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.551 [2024-11-07 13:44:06.536828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.551 [2024-11-07 13:44:06.549491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.551 [2024-11-07 13:44:06.550098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.551 [2024-11-07 13:44:06.550124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.551 [2024-11-07 13:44:06.550136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.551 [2024-11-07 13:44:06.550370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.551 [2024-11-07 13:44:06.550603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.551 [2024-11-07 13:44:06.550615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.551 [2024-11-07 13:44:06.550625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.551 [2024-11-07 13:44:06.550634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.813 [2024-11-07 13:44:06.563529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.813 [2024-11-07 13:44:06.564083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.813 [2024-11-07 13:44:06.564106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.814 [2024-11-07 13:44:06.564118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.814 [2024-11-07 13:44:06.564351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.814 [2024-11-07 13:44:06.564584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.814 [2024-11-07 13:44:06.564596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.814 [2024-11-07 13:44:06.564606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.814 [2024-11-07 13:44:06.564615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.814 [2024-11-07 13:44:06.577689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.814 [2024-11-07 13:44:06.578270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.814 [2024-11-07 13:44:06.578293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.814 [2024-11-07 13:44:06.578309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.814 [2024-11-07 13:44:06.578546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.814 [2024-11-07 13:44:06.578779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.814 [2024-11-07 13:44:06.578790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.814 [2024-11-07 13:44:06.578800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.814 [2024-11-07 13:44:06.578809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.814 [2024-11-07 13:44:06.591669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.814 [2024-11-07 13:44:06.592350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.814 [2024-11-07 13:44:06.592396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.814 [2024-11-07 13:44:06.592412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.814 [2024-11-07 13:44:06.592678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.814 [2024-11-07 13:44:06.592927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.814 [2024-11-07 13:44:06.592941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.814 [2024-11-07 13:44:06.592952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.814 [2024-11-07 13:44:06.592963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.814 [2024-11-07 13:44:06.605833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.814 [2024-11-07 13:44:06.606543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.814 [2024-11-07 13:44:06.606590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.814 [2024-11-07 13:44:06.606606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.814 [2024-11-07 13:44:06.606882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.814 [2024-11-07 13:44:06.607122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.814 [2024-11-07 13:44:06.607135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.814 [2024-11-07 13:44:06.607146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.814 [2024-11-07 13:44:06.607157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.814 [2024-11-07 13:44:06.619809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.814 [2024-11-07 13:44:06.620526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.814 [2024-11-07 13:44:06.620572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.814 [2024-11-07 13:44:06.620587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.814 [2024-11-07 13:44:06.620853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.814 [2024-11-07 13:44:06.621107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.814 [2024-11-07 13:44:06.621122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.814 [2024-11-07 13:44:06.621133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.814 [2024-11-07 13:44:06.621144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.814 [2024-11-07 13:44:06.633793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.814 [2024-11-07 13:44:06.634300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.814 [2024-11-07 13:44:06.634325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.814 [2024-11-07 13:44:06.634337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.814 [2024-11-07 13:44:06.634571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.814 [2024-11-07 13:44:06.634804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.814 [2024-11-07 13:44:06.634816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.814 [2024-11-07 13:44:06.634826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.814 [2024-11-07 13:44:06.634835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.814 [2024-11-07 13:44:06.647945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.814 [2024-11-07 13:44:06.648549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.814 [2024-11-07 13:44:06.648571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.814 [2024-11-07 13:44:06.648582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.814 [2024-11-07 13:44:06.648814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.814 [2024-11-07 13:44:06.649054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.814 [2024-11-07 13:44:06.649066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.814 [2024-11-07 13:44:06.649076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.814 [2024-11-07 13:44:06.649085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.814 [2024-11-07 13:44:06.661956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.814 [2024-11-07 13:44:06.662518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.814 [2024-11-07 13:44:06.662541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.814 [2024-11-07 13:44:06.662552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.814 [2024-11-07 13:44:06.662785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.814 [2024-11-07 13:44:06.663024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.814 [2024-11-07 13:44:06.663036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.814 [2024-11-07 13:44:06.663049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.814 [2024-11-07 13:44:06.663059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.814 [2024-11-07 13:44:06.676139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.814 [2024-11-07 13:44:06.676694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.814 [2024-11-07 13:44:06.676715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.814 [2024-11-07 13:44:06.676726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.814 [2024-11-07 13:44:06.676966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.814 [2024-11-07 13:44:06.677199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.814 [2024-11-07 13:44:06.677210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.814 [2024-11-07 13:44:06.677220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.814 [2024-11-07 13:44:06.677229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.814 7589.00 IOPS, 29.64 MiB/s [2024-11-07T12:44:06.821Z] [2024-11-07 13:44:06.690119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.814 [2024-11-07 13:44:06.690675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.814 [2024-11-07 13:44:06.690697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.815 [2024-11-07 13:44:06.690708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.815 [2024-11-07 13:44:06.690946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.815 [2024-11-07 13:44:06.691179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.815 [2024-11-07 13:44:06.691193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.815 [2024-11-07 13:44:06.691203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.815 [2024-11-07 13:44:06.691213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.815 [2024-11-07 13:44:06.704284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.815 [2024-11-07 13:44:06.704842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.815 [2024-11-07 13:44:06.704871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.815 [2024-11-07 13:44:06.704883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.815 [2024-11-07 13:44:06.705115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.815 [2024-11-07 13:44:06.705348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.815 [2024-11-07 13:44:06.705359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.815 [2024-11-07 13:44:06.705369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.815 [2024-11-07 13:44:06.705378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.815 [2024-11-07 13:44:06.718454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.815 [2024-11-07 13:44:06.718989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.815 [2024-11-07 13:44:06.719012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.815 [2024-11-07 13:44:06.719023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.815 [2024-11-07 13:44:06.719256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.815 [2024-11-07 13:44:06.719488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.815 [2024-11-07 13:44:06.719500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.815 [2024-11-07 13:44:06.719509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.815 [2024-11-07 13:44:06.719518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.815 [2024-11-07 13:44:06.732595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.815 [2024-11-07 13:44:06.733177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.815 [2024-11-07 13:44:06.733200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.815 [2024-11-07 13:44:06.733211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.815 [2024-11-07 13:44:06.733445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.815 [2024-11-07 13:44:06.733684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.815 [2024-11-07 13:44:06.733696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.815 [2024-11-07 13:44:06.733705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.815 [2024-11-07 13:44:06.733716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.815 [2024-11-07 13:44:06.746614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.815 [2024-11-07 13:44:06.747230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.815 [2024-11-07 13:44:06.747253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.815 [2024-11-07 13:44:06.747265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.815 [2024-11-07 13:44:06.747497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.815 [2024-11-07 13:44:06.747730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.815 [2024-11-07 13:44:06.747742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.815 [2024-11-07 13:44:06.747752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.815 [2024-11-07 13:44:06.747762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.815 [2024-11-07 13:44:06.760624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.815 [2024-11-07 13:44:06.761222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.815 [2024-11-07 13:44:06.761249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.815 [2024-11-07 13:44:06.761260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.815 [2024-11-07 13:44:06.761492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.815 [2024-11-07 13:44:06.761725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.815 [2024-11-07 13:44:06.761736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.815 [2024-11-07 13:44:06.761745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.815 [2024-11-07 13:44:06.761754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.815 [2024-11-07 13:44:06.774619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.815 [2024-11-07 13:44:06.775161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.815 [2024-11-07 13:44:06.775184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.815 [2024-11-07 13:44:06.775195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.815 [2024-11-07 13:44:06.775427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.815 [2024-11-07 13:44:06.775660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.815 [2024-11-07 13:44:06.775671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.815 [2024-11-07 13:44:06.775680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.815 [2024-11-07 13:44:06.775689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.815 [2024-11-07 13:44:06.788761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.815 [2024-11-07 13:44:06.789198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.815 [2024-11-07 13:44:06.789222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.815 [2024-11-07 13:44:06.789233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.815 [2024-11-07 13:44:06.789467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.815 [2024-11-07 13:44:06.789699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.815 [2024-11-07 13:44:06.789710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.815 [2024-11-07 13:44:06.789720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.815 [2024-11-07 13:44:06.789729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:58.815 [2024-11-07 13:44:06.802813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.815 [2024-11-07 13:44:06.803418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.815 [2024-11-07 13:44:06.803441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:58.815 [2024-11-07 13:44:06.803452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:58.815 [2024-11-07 13:44:06.803688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:58.815 [2024-11-07 13:44:06.803928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.815 [2024-11-07 13:44:06.803941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.815 [2024-11-07 13:44:06.803950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.815 [2024-11-07 13:44:06.803959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.077 [2024-11-07 13:44:06.816820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.077 [2024-11-07 13:44:06.817403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.077 [2024-11-07 13:44:06.817427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.077 [2024-11-07 13:44:06.817437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.077 [2024-11-07 13:44:06.817670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.077 [2024-11-07 13:44:06.817908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.077 [2024-11-07 13:44:06.817920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.077 [2024-11-07 13:44:06.817929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.077 [2024-11-07 13:44:06.817938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.077 [2024-11-07 13:44:06.830800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.077 [2024-11-07 13:44:06.831367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.077 [2024-11-07 13:44:06.831390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.077 [2024-11-07 13:44:06.831400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.078 [2024-11-07 13:44:06.831632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.078 [2024-11-07 13:44:06.831870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.078 [2024-11-07 13:44:06.831882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.078 [2024-11-07 13:44:06.831891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.078 [2024-11-07 13:44:06.831900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.078 [2024-11-07 13:44:06.844773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.078 [2024-11-07 13:44:06.845376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.078 [2024-11-07 13:44:06.845398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.078 [2024-11-07 13:44:06.845409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.078 [2024-11-07 13:44:06.845641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.078 [2024-11-07 13:44:06.845887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.078 [2024-11-07 13:44:06.845903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.078 [2024-11-07 13:44:06.845913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.078 [2024-11-07 13:44:06.845922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.078 [2024-11-07 13:44:06.858779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.078 [2024-11-07 13:44:06.859403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.078 [2024-11-07 13:44:06.859427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.078 [2024-11-07 13:44:06.859438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.078 [2024-11-07 13:44:06.859671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.078 [2024-11-07 13:44:06.859909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.078 [2024-11-07 13:44:06.859921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.078 [2024-11-07 13:44:06.859930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.078 [2024-11-07 13:44:06.859940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.078 [2024-11-07 13:44:06.872798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.078 [2024-11-07 13:44:06.873356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.078 [2024-11-07 13:44:06.873379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.078 [2024-11-07 13:44:06.873389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.078 [2024-11-07 13:44:06.873622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.078 [2024-11-07 13:44:06.873854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.078 [2024-11-07 13:44:06.873872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.078 [2024-11-07 13:44:06.873882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.078 [2024-11-07 13:44:06.873891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.078 [2024-11-07 13:44:06.886978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.078 [2024-11-07 13:44:06.887550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.078 [2024-11-07 13:44:06.887573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.078 [2024-11-07 13:44:06.887583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.078 [2024-11-07 13:44:06.887816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.078 [2024-11-07 13:44:06.888057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.078 [2024-11-07 13:44:06.888070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.078 [2024-11-07 13:44:06.888079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.078 [2024-11-07 13:44:06.888092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.078 [2024-11-07 13:44:06.900963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.078 [2024-11-07 13:44:06.901406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.078 [2024-11-07 13:44:06.901428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.078 [2024-11-07 13:44:06.901439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.078 [2024-11-07 13:44:06.901672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.078 [2024-11-07 13:44:06.901911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.078 [2024-11-07 13:44:06.901923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.078 [2024-11-07 13:44:06.901933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.078 [2024-11-07 13:44:06.901942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.078 [2024-11-07 13:44:06.915018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.078 [2024-11-07 13:44:06.915471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.078 [2024-11-07 13:44:06.915493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.078 [2024-11-07 13:44:06.915503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.078 [2024-11-07 13:44:06.915736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.078 [2024-11-07 13:44:06.915974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.078 [2024-11-07 13:44:06.915987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.078 [2024-11-07 13:44:06.915996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.078 [2024-11-07 13:44:06.916006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.078 [2024-11-07 13:44:06.929086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.078 [2024-11-07 13:44:06.929683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.078 [2024-11-07 13:44:06.929730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.078 [2024-11-07 13:44:06.929746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.078 [2024-11-07 13:44:06.930022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.078 [2024-11-07 13:44:06.930262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.078 [2024-11-07 13:44:06.930275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.078 [2024-11-07 13:44:06.930294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.078 [2024-11-07 13:44:06.930305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.078 [2024-11-07 13:44:06.943194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.078 [2024-11-07 13:44:06.943821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.078 [2024-11-07 13:44:06.943845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.078 [2024-11-07 13:44:06.943857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.078 [2024-11-07 13:44:06.944099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.078 [2024-11-07 13:44:06.944333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.078 [2024-11-07 13:44:06.944344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.078 [2024-11-07 13:44:06.944354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.078 [2024-11-07 13:44:06.944364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.078 [2024-11-07 13:44:06.957242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.078 [2024-11-07 13:44:06.957962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.078 [2024-11-07 13:44:06.958008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.078 [2024-11-07 13:44:06.958026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.078 [2024-11-07 13:44:06.958292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.078 [2024-11-07 13:44:06.958529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.078 [2024-11-07 13:44:06.958541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.078 [2024-11-07 13:44:06.958553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.078 [2024-11-07 13:44:06.958564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.078 [2024-11-07 13:44:06.971214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.079 [2024-11-07 13:44:06.971833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.079 [2024-11-07 13:44:06.971858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.079 [2024-11-07 13:44:06.971875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.079 [2024-11-07 13:44:06.972110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.079 [2024-11-07 13:44:06.972343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.079 [2024-11-07 13:44:06.972354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.079 [2024-11-07 13:44:06.972364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.079 [2024-11-07 13:44:06.972374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.079 [2024-11-07 13:44:06.985218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.079 [2024-11-07 13:44:06.985891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.079 [2024-11-07 13:44:06.985949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.079 [2024-11-07 13:44:06.985972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.079 [2024-11-07 13:44:06.986239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.079 [2024-11-07 13:44:06.986477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.079 [2024-11-07 13:44:06.986491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.079 [2024-11-07 13:44:06.986502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.079 [2024-11-07 13:44:06.986513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.079 [2024-11-07 13:44:06.999250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.079 [2024-11-07 13:44:06.999926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.079 [2024-11-07 13:44:06.999972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.079 [2024-11-07 13:44:06.999989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.079 [2024-11-07 13:44:07.000256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.079 [2024-11-07 13:44:07.000494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.079 [2024-11-07 13:44:07.000507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.079 [2024-11-07 13:44:07.000519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.079 [2024-11-07 13:44:07.000531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.079 [2024-11-07 13:44:07.013399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.079 [2024-11-07 13:44:07.014125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.079 [2024-11-07 13:44:07.014171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.079 [2024-11-07 13:44:07.014187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.079 [2024-11-07 13:44:07.014453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.079 [2024-11-07 13:44:07.014691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.079 [2024-11-07 13:44:07.014704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.079 [2024-11-07 13:44:07.014715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.079 [2024-11-07 13:44:07.014726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.079 [2024-11-07 13:44:07.027373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.079 [2024-11-07 13:44:07.028007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.079 [2024-11-07 13:44:07.028054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.079 [2024-11-07 13:44:07.028069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.079 [2024-11-07 13:44:07.028336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.079 [2024-11-07 13:44:07.028579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.079 [2024-11-07 13:44:07.028592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.079 [2024-11-07 13:44:07.028603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.079 [2024-11-07 13:44:07.028615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.079 [2024-11-07 13:44:07.041500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.079 [2024-11-07 13:44:07.042133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.079 [2024-11-07 13:44:07.042179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.079 [2024-11-07 13:44:07.042195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.079 [2024-11-07 13:44:07.042461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.079 [2024-11-07 13:44:07.042698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.079 [2024-11-07 13:44:07.042711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.079 [2024-11-07 13:44:07.042722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.079 [2024-11-07 13:44:07.042733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.079 [2024-11-07 13:44:07.055600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.079 [2024-11-07 13:44:07.056270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.079 [2024-11-07 13:44:07.056316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.079 [2024-11-07 13:44:07.056333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.079 [2024-11-07 13:44:07.056599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.079 [2024-11-07 13:44:07.056836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.079 [2024-11-07 13:44:07.056849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.079 [2024-11-07 13:44:07.056860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.079 [2024-11-07 13:44:07.056880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.079 [2024-11-07 13:44:07.069726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.079 [2024-11-07 13:44:07.070353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.079 [2024-11-07 13:44:07.070378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.079 [2024-11-07 13:44:07.070390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.079 [2024-11-07 13:44:07.070623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.079 [2024-11-07 13:44:07.070857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.079 [2024-11-07 13:44:07.070879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.079 [2024-11-07 13:44:07.070889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.079 [2024-11-07 13:44:07.070899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.342 [2024-11-07 13:44:07.083736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.342 [2024-11-07 13:44:07.084293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.342 [2024-11-07 13:44:07.084317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.342 [2024-11-07 13:44:07.084328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.342 [2024-11-07 13:44:07.084561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.342 [2024-11-07 13:44:07.084794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.342 [2024-11-07 13:44:07.084806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.342 [2024-11-07 13:44:07.084815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.342 [2024-11-07 13:44:07.084825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.342 [2024-11-07 13:44:07.098088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.342 [2024-11-07 13:44:07.098776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.342 [2024-11-07 13:44:07.098822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.342 [2024-11-07 13:44:07.098839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.342 [2024-11-07 13:44:07.099114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.342 [2024-11-07 13:44:07.099353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.342 [2024-11-07 13:44:07.099366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.342 [2024-11-07 13:44:07.099377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.342 [2024-11-07 13:44:07.099389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.342 [2024-11-07 13:44:07.112252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.342 [2024-11-07 13:44:07.112857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.342 [2024-11-07 13:44:07.112911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.342 [2024-11-07 13:44:07.112928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.342 [2024-11-07 13:44:07.113195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.342 [2024-11-07 13:44:07.113433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.342 [2024-11-07 13:44:07.113446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.342 [2024-11-07 13:44:07.113457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.342 [2024-11-07 13:44:07.113473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.342 [2024-11-07 13:44:07.126329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.342 [2024-11-07 13:44:07.127057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.342 [2024-11-07 13:44:07.127104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.342 [2024-11-07 13:44:07.127119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.342 [2024-11-07 13:44:07.127386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.342 [2024-11-07 13:44:07.127624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.342 [2024-11-07 13:44:07.127637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.342 [2024-11-07 13:44:07.127648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.342 [2024-11-07 13:44:07.127659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.342 [2024-11-07 13:44:07.140326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.342 [2024-11-07 13:44:07.140947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.342 [2024-11-07 13:44:07.140994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.342 [2024-11-07 13:44:07.141010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.342 [2024-11-07 13:44:07.141275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.342 [2024-11-07 13:44:07.141514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.342 [2024-11-07 13:44:07.141527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.342 [2024-11-07 13:44:07.141537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.342 [2024-11-07 13:44:07.141549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.342 [2024-11-07 13:44:07.154418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.342 [2024-11-07 13:44:07.154987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.342 [2024-11-07 13:44:07.155033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.342 [2024-11-07 13:44:07.155050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.342 [2024-11-07 13:44:07.155316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.342 [2024-11-07 13:44:07.155553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.342 [2024-11-07 13:44:07.155567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.342 [2024-11-07 13:44:07.155578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.343 [2024-11-07 13:44:07.155589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.343 [2024-11-07 13:44:07.168456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.343 [2024-11-07 13:44:07.169172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.343 [2024-11-07 13:44:07.169218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.343 [2024-11-07 13:44:07.169234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.343 [2024-11-07 13:44:07.169500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.343 [2024-11-07 13:44:07.169737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.343 [2024-11-07 13:44:07.169750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.343 [2024-11-07 13:44:07.169761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.343 [2024-11-07 13:44:07.169772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.343 [2024-11-07 13:44:07.182631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.343 [2024-11-07 13:44:07.183318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.343 [2024-11-07 13:44:07.183364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.343 [2024-11-07 13:44:07.183380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.343 [2024-11-07 13:44:07.183646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.343 [2024-11-07 13:44:07.183894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.343 [2024-11-07 13:44:07.183908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.343 [2024-11-07 13:44:07.183919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.343 [2024-11-07 13:44:07.183930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.343 [2024-11-07 13:44:07.196794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.343 [2024-11-07 13:44:07.197509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.343 [2024-11-07 13:44:07.197556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.343 [2024-11-07 13:44:07.197571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.343 [2024-11-07 13:44:07.197837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.343 [2024-11-07 13:44:07.198083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.343 [2024-11-07 13:44:07.198097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.343 [2024-11-07 13:44:07.198108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.343 [2024-11-07 13:44:07.198119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.343 [2024-11-07 13:44:07.210965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.343 [2024-11-07 13:44:07.211631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.343 [2024-11-07 13:44:07.211677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.343 [2024-11-07 13:44:07.211697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.343 [2024-11-07 13:44:07.211973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.343 [2024-11-07 13:44:07.212212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.343 [2024-11-07 13:44:07.212224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.343 [2024-11-07 13:44:07.212235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.343 [2024-11-07 13:44:07.212247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.343 [2024-11-07 13:44:07.225092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.343 [2024-11-07 13:44:07.225666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.343 [2024-11-07 13:44:07.225691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.343 [2024-11-07 13:44:07.225702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.343 [2024-11-07 13:44:07.225944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.343 [2024-11-07 13:44:07.226178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.343 [2024-11-07 13:44:07.226190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.343 [2024-11-07 13:44:07.226200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.343 [2024-11-07 13:44:07.226209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.343 [2024-11-07 13:44:07.239066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.343 [2024-11-07 13:44:07.239629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.343 [2024-11-07 13:44:07.239652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.343 [2024-11-07 13:44:07.239663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.343 [2024-11-07 13:44:07.239902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.343 [2024-11-07 13:44:07.240136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.343 [2024-11-07 13:44:07.240147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.343 [2024-11-07 13:44:07.240157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.343 [2024-11-07 13:44:07.240167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.343 [2024-11-07 13:44:07.253232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.343 [2024-11-07 13:44:07.253788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.343 [2024-11-07 13:44:07.253811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.343 [2024-11-07 13:44:07.253822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.343 [2024-11-07 13:44:07.254060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.343 [2024-11-07 13:44:07.254297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.343 [2024-11-07 13:44:07.254308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.343 [2024-11-07 13:44:07.254318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.343 [2024-11-07 13:44:07.254327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.343 [2024-11-07 13:44:07.267374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.343 [2024-11-07 13:44:07.268084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.343 [2024-11-07 13:44:07.268130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.343 [2024-11-07 13:44:07.268146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.343 [2024-11-07 13:44:07.268412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.343 [2024-11-07 13:44:07.268651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.343 [2024-11-07 13:44:07.268664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.343 [2024-11-07 13:44:07.268675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.343 [2024-11-07 13:44:07.268686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.343 [2024-11-07 13:44:07.281542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.343 [2024-11-07 13:44:07.282183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.343 [2024-11-07 13:44:07.282230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.343 [2024-11-07 13:44:07.282245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.343 [2024-11-07 13:44:07.282511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.343 [2024-11-07 13:44:07.282749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.343 [2024-11-07 13:44:07.282762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.343 [2024-11-07 13:44:07.282773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.344 [2024-11-07 13:44:07.282784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.344 [2024-11-07 13:44:07.295640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.344 [2024-11-07 13:44:07.296309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.344 [2024-11-07 13:44:07.296355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.344 [2024-11-07 13:44:07.296371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.344 [2024-11-07 13:44:07.296637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.344 [2024-11-07 13:44:07.296884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.344 [2024-11-07 13:44:07.296898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.344 [2024-11-07 13:44:07.296917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.344 [2024-11-07 13:44:07.296929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.344 [2024-11-07 13:44:07.309778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.344 [2024-11-07 13:44:07.310497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.344 [2024-11-07 13:44:07.310543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.344 [2024-11-07 13:44:07.310559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.344 [2024-11-07 13:44:07.310825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.344 [2024-11-07 13:44:07.311072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.344 [2024-11-07 13:44:07.311086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.344 [2024-11-07 13:44:07.311097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.344 [2024-11-07 13:44:07.311109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.344 [2024-11-07 13:44:07.323959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.344 [2024-11-07 13:44:07.324667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.344 [2024-11-07 13:44:07.324712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.344 [2024-11-07 13:44:07.324728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.344 [2024-11-07 13:44:07.325004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.344 [2024-11-07 13:44:07.325243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.344 [2024-11-07 13:44:07.325255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.344 [2024-11-07 13:44:07.325266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.344 [2024-11-07 13:44:07.325278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.344 [2024-11-07 13:44:07.338143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.344 [2024-11-07 13:44:07.338793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.344 [2024-11-07 13:44:07.338846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.344 [2024-11-07 13:44:07.338870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.344 [2024-11-07 13:44:07.339137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.344 [2024-11-07 13:44:07.339396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.344 [2024-11-07 13:44:07.339411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.344 [2024-11-07 13:44:07.339422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.344 [2024-11-07 13:44:07.339433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.606 [2024-11-07 13:44:07.352302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.606 [2024-11-07 13:44:07.352962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.606 [2024-11-07 13:44:07.353008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.606 [2024-11-07 13:44:07.353023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.606 [2024-11-07 13:44:07.353290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.606 [2024-11-07 13:44:07.353528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.606 [2024-11-07 13:44:07.353541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.606 [2024-11-07 13:44:07.353552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.606 [2024-11-07 13:44:07.353563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.606 [2024-11-07 13:44:07.366425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.606 [2024-11-07 13:44:07.367123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.606 [2024-11-07 13:44:07.367169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.606 [2024-11-07 13:44:07.367185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.606 [2024-11-07 13:44:07.367451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.606 [2024-11-07 13:44:07.367689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.606 [2024-11-07 13:44:07.367702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.606 [2024-11-07 13:44:07.367713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.606 [2024-11-07 13:44:07.367725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.606 [2024-11-07 13:44:07.380588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.606 [2024-11-07 13:44:07.381264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.606 [2024-11-07 13:44:07.381310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.606 [2024-11-07 13:44:07.381326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.606 [2024-11-07 13:44:07.381592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.607 [2024-11-07 13:44:07.381830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.607 [2024-11-07 13:44:07.381842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.607 [2024-11-07 13:44:07.381853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.607 [2024-11-07 13:44:07.381875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.607 [2024-11-07 13:44:07.394718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.607 [2024-11-07 13:44:07.395459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.607 [2024-11-07 13:44:07.395515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.607 [2024-11-07 13:44:07.395531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.607 [2024-11-07 13:44:07.395797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.607 [2024-11-07 13:44:07.396045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.607 [2024-11-07 13:44:07.396059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.607 [2024-11-07 13:44:07.396070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.607 [2024-11-07 13:44:07.396081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.607 [2024-11-07 13:44:07.408709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.607 [2024-11-07 13:44:07.409408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.607 [2024-11-07 13:44:07.409455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.607 [2024-11-07 13:44:07.409470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.607 [2024-11-07 13:44:07.409736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.607 [2024-11-07 13:44:07.409985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.607 [2024-11-07 13:44:07.409999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.607 [2024-11-07 13:44:07.410010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.607 [2024-11-07 13:44:07.410021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.607 [2024-11-07 13:44:07.422875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.607 [2024-11-07 13:44:07.423566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.607 [2024-11-07 13:44:07.423612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.607 [2024-11-07 13:44:07.423627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.607 [2024-11-07 13:44:07.423902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.607 [2024-11-07 13:44:07.424141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.607 [2024-11-07 13:44:07.424154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.607 [2024-11-07 13:44:07.424164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.607 [2024-11-07 13:44:07.424176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.607 [2024-11-07 13:44:07.437040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.607 [2024-11-07 13:44:07.437652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.607 [2024-11-07 13:44:07.437677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.607 [2024-11-07 13:44:07.437689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.607 [2024-11-07 13:44:07.437935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.607 [2024-11-07 13:44:07.438170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.607 [2024-11-07 13:44:07.438183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.607 [2024-11-07 13:44:07.438193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.607 [2024-11-07 13:44:07.438203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.607 [2024-11-07 13:44:07.451067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.607 [2024-11-07 13:44:07.451649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.607 [2024-11-07 13:44:07.451673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.607 [2024-11-07 13:44:07.451685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.607 [2024-11-07 13:44:07.451925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.607 [2024-11-07 13:44:07.452160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.607 [2024-11-07 13:44:07.452172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.607 [2024-11-07 13:44:07.452182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.607 [2024-11-07 13:44:07.452192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.607 [2024-11-07 13:44:07.465029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.607 [2024-11-07 13:44:07.465543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.607 [2024-11-07 13:44:07.465566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.607 [2024-11-07 13:44:07.465576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.607 [2024-11-07 13:44:07.465809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.607 [2024-11-07 13:44:07.466049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.607 [2024-11-07 13:44:07.466068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.607 [2024-11-07 13:44:07.466078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.607 [2024-11-07 13:44:07.466088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.607 [2024-11-07 13:44:07.479139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.607 [2024-11-07 13:44:07.479830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.607 [2024-11-07 13:44:07.479884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.607 [2024-11-07 13:44:07.479901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.607 [2024-11-07 13:44:07.480167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.607 [2024-11-07 13:44:07.480405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.607 [2024-11-07 13:44:07.480423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.607 [2024-11-07 13:44:07.480435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.607 [2024-11-07 13:44:07.480446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.607 [2024-11-07 13:44:07.493310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.607 [2024-11-07 13:44:07.493923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.607 [2024-11-07 13:44:07.493950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.607 [2024-11-07 13:44:07.493961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.607 [2024-11-07 13:44:07.494196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.607 [2024-11-07 13:44:07.494429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.607 [2024-11-07 13:44:07.494442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.607 [2024-11-07 13:44:07.494453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.607 [2024-11-07 13:44:07.494463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.607 [2024-11-07 13:44:07.507311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.607 [2024-11-07 13:44:07.507909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.607 [2024-11-07 13:44:07.507933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.607 [2024-11-07 13:44:07.507944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.607 [2024-11-07 13:44:07.508176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.607 [2024-11-07 13:44:07.508410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.607 [2024-11-07 13:44:07.508422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.607 [2024-11-07 13:44:07.508433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.607 [2024-11-07 13:44:07.508443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.607 [2024-11-07 13:44:07.521281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.607 [2024-11-07 13:44:07.521825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.607 [2024-11-07 13:44:07.521847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.608 [2024-11-07 13:44:07.521858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.608 [2024-11-07 13:44:07.522097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.608 [2024-11-07 13:44:07.522330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.608 [2024-11-07 13:44:07.522343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.608 [2024-11-07 13:44:07.522356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.608 [2024-11-07 13:44:07.522366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.608 [2024-11-07 13:44:07.535437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.608 [2024-11-07 13:44:07.535993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.608 [2024-11-07 13:44:07.536042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.608 [2024-11-07 13:44:07.536058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.608 [2024-11-07 13:44:07.536331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.608 [2024-11-07 13:44:07.536571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.608 [2024-11-07 13:44:07.536585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.608 [2024-11-07 13:44:07.536596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.608 [2024-11-07 13:44:07.536607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.608 [2024-11-07 13:44:07.549496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.608 [2024-11-07 13:44:07.550211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.608 [2024-11-07 13:44:07.550259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.608 [2024-11-07 13:44:07.550275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.608 [2024-11-07 13:44:07.550541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.608 [2024-11-07 13:44:07.550780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.608 [2024-11-07 13:44:07.550794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.608 [2024-11-07 13:44:07.550805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.608 [2024-11-07 13:44:07.550816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.608 [2024-11-07 13:44:07.563670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.608 [2024-11-07 13:44:07.564361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.608 [2024-11-07 13:44:07.564409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.608 [2024-11-07 13:44:07.564425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.608 [2024-11-07 13:44:07.564691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.608 [2024-11-07 13:44:07.564941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.608 [2024-11-07 13:44:07.564957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.608 [2024-11-07 13:44:07.564968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.608 [2024-11-07 13:44:07.564980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.608 [2024-11-07 13:44:07.577831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.608 [2024-11-07 13:44:07.578439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.608 [2024-11-07 13:44:07.578465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.608 [2024-11-07 13:44:07.578477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.608 [2024-11-07 13:44:07.578711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.608 [2024-11-07 13:44:07.578999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.608 [2024-11-07 13:44:07.579014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.608 [2024-11-07 13:44:07.579024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.608 [2024-11-07 13:44:07.579035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.608 [2024-11-07 13:44:07.591869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.608 [2024-11-07 13:44:07.592420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.608 [2024-11-07 13:44:07.592443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.608 [2024-11-07 13:44:07.592454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.608 [2024-11-07 13:44:07.592688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.608 [2024-11-07 13:44:07.592928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.608 [2024-11-07 13:44:07.592942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.608 [2024-11-07 13:44:07.592952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.608 [2024-11-07 13:44:07.592961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.608 [2024-11-07 13:44:07.606011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.608 [2024-11-07 13:44:07.606681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.608 [2024-11-07 13:44:07.606728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.608 [2024-11-07 13:44:07.606744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.608 [2024-11-07 13:44:07.607021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.608 [2024-11-07 13:44:07.607261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.608 [2024-11-07 13:44:07.607275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.608 [2024-11-07 13:44:07.607286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.608 [2024-11-07 13:44:07.607298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.870 [2024-11-07 13:44:07.620149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.870 [2024-11-07 13:44:07.620892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.870 [2024-11-07 13:44:07.620940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.870 [2024-11-07 13:44:07.620962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.870 [2024-11-07 13:44:07.621228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.870 [2024-11-07 13:44:07.621466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.870 [2024-11-07 13:44:07.621481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.870 [2024-11-07 13:44:07.621492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.870 [2024-11-07 13:44:07.621503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.870 [2024-11-07 13:44:07.634156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.870 [2024-11-07 13:44:07.634849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.870 [2024-11-07 13:44:07.634901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.870 [2024-11-07 13:44:07.634917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.870 [2024-11-07 13:44:07.635183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.870 [2024-11-07 13:44:07.635421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.870 [2024-11-07 13:44:07.635435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.870 [2024-11-07 13:44:07.635447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.870 [2024-11-07 13:44:07.635458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.870 [2024-11-07 13:44:07.648345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.870 [2024-11-07 13:44:07.648981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.870 [2024-11-07 13:44:07.649029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.870 [2024-11-07 13:44:07.649047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.870 [2024-11-07 13:44:07.649315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.870 [2024-11-07 13:44:07.649554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.870 [2024-11-07 13:44:07.649569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.870 [2024-11-07 13:44:07.649580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.870 [2024-11-07 13:44:07.649591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.870 [2024-11-07 13:44:07.662445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.870 [2024-11-07 13:44:07.663150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.870 [2024-11-07 13:44:07.663198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.870 [2024-11-07 13:44:07.663214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.870 [2024-11-07 13:44:07.663485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.870 [2024-11-07 13:44:07.663723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.870 [2024-11-07 13:44:07.663738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.870 [2024-11-07 13:44:07.663749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.870 [2024-11-07 13:44:07.663760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.870 [2024-11-07 13:44:07.676620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.870 [2024-11-07 13:44:07.677302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.870 [2024-11-07 13:44:07.677350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.870 [2024-11-07 13:44:07.677367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.870 [2024-11-07 13:44:07.677633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.870 [2024-11-07 13:44:07.677883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.871 [2024-11-07 13:44:07.677899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.871 [2024-11-07 13:44:07.677910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.871 [2024-11-07 13:44:07.677922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.871 5691.75 IOPS, 22.23 MiB/s [2024-11-07T12:44:07.878Z] [2024-11-07 13:44:07.690778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.871 [2024-11-07 13:44:07.691391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.871 [2024-11-07 13:44:07.691417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.871 [2024-11-07 13:44:07.691429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.871 [2024-11-07 13:44:07.691663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.871 [2024-11-07 13:44:07.691904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.871 [2024-11-07 13:44:07.691918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.871 [2024-11-07 13:44:07.691928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.871 [2024-11-07 13:44:07.691938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.871 [2024-11-07 13:44:07.704776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.871 [2024-11-07 13:44:07.705337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.871 [2024-11-07 13:44:07.705361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.871 [2024-11-07 13:44:07.705372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.871 [2024-11-07 13:44:07.705605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.871 [2024-11-07 13:44:07.705838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.871 [2024-11-07 13:44:07.705855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.871 [2024-11-07 13:44:07.705872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.871 [2024-11-07 13:44:07.705883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
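The interleaved "5691.75 IOPS, 22.23 MiB/s" reading above is a periodic bdevperf-style throughput sample, not an error. The two numbers are mutually consistent with a 4 KiB I/O size; the block size is an assumption here, inferred from the arithmetic rather than stated in the log. A quick check:

#include <stdio.h>

int main(void)
{
    double iops = 5691.75;
    double block_bytes = 4096.0;  /* assumed 4 KiB I/O size, not in the log */
    double mib_s = iops * block_bytes / (1024.0 * 1024.0);
    printf("%.2f MiB/s\n", mib_s);  /* prints 22.23, matching the sample */
    return 0;
}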
00:38:59.871 [2024-11-07 13:44:07.718799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.871 [2024-11-07 13:44:07.719488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.871 [2024-11-07 13:44:07.719536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.871 [2024-11-07 13:44:07.719552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.871 [2024-11-07 13:44:07.719818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.871 [2024-11-07 13:44:07.720069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.871 [2024-11-07 13:44:07.720083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.871 [2024-11-07 13:44:07.720094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.871 [2024-11-07 13:44:07.720106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.871 [2024-11-07 13:44:07.732953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.871 [2024-11-07 13:44:07.733632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.871 [2024-11-07 13:44:07.733679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.871 [2024-11-07 13:44:07.733694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.871 [2024-11-07 13:44:07.733970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.871 [2024-11-07 13:44:07.734211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.871 [2024-11-07 13:44:07.734234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.871 [2024-11-07 13:44:07.734245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.871 [2024-11-07 13:44:07.734256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.871 [2024-11-07 13:44:07.747142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.871 [2024-11-07 13:44:07.747777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.871 [2024-11-07 13:44:07.747825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.871 [2024-11-07 13:44:07.747843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.871 [2024-11-07 13:44:07.748119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.871 [2024-11-07 13:44:07.748359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.871 [2024-11-07 13:44:07.748374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.871 [2024-11-07 13:44:07.748385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.871 [2024-11-07 13:44:07.748401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.871 [2024-11-07 13:44:07.761259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.871 [2024-11-07 13:44:07.761869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.871 [2024-11-07 13:44:07.761916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.871 [2024-11-07 13:44:07.761935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.871 [2024-11-07 13:44:07.762202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.871 [2024-11-07 13:44:07.762441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.871 [2024-11-07 13:44:07.762455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.871 [2024-11-07 13:44:07.762467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.871 [2024-11-07 13:44:07.762478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.871 [2024-11-07 13:44:07.775330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.871 [2024-11-07 13:44:07.776074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.871 [2024-11-07 13:44:07.776121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.871 [2024-11-07 13:44:07.776137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.871 [2024-11-07 13:44:07.776403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.871 [2024-11-07 13:44:07.776641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.871 [2024-11-07 13:44:07.776656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.871 [2024-11-07 13:44:07.776667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.871 [2024-11-07 13:44:07.776678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.871 [2024-11-07 13:44:07.789317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.871 [2024-11-07 13:44:07.789967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.871 [2024-11-07 13:44:07.790015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.871 [2024-11-07 13:44:07.790031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.871 [2024-11-07 13:44:07.790297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.871 [2024-11-07 13:44:07.790535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.871 [2024-11-07 13:44:07.790549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.871 [2024-11-07 13:44:07.790560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.871 [2024-11-07 13:44:07.790572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.871 [2024-11-07 13:44:07.803431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.871 [2024-11-07 13:44:07.804163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.871 [2024-11-07 13:44:07.804211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.872 [2024-11-07 13:44:07.804227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.872 [2024-11-07 13:44:07.804493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.872 [2024-11-07 13:44:07.804732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.872 [2024-11-07 13:44:07.804746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.872 [2024-11-07 13:44:07.804757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.872 [2024-11-07 13:44:07.804769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.872 [2024-11-07 13:44:07.817403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.872 [2024-11-07 13:44:07.818167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.872 [2024-11-07 13:44:07.818215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.872 [2024-11-07 13:44:07.818231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.872 [2024-11-07 13:44:07.818496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.872 [2024-11-07 13:44:07.818735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.872 [2024-11-07 13:44:07.818749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.872 [2024-11-07 13:44:07.818760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.872 [2024-11-07 13:44:07.818771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.872 [2024-11-07 13:44:07.831408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.872 [2024-11-07 13:44:07.831874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.872 [2024-11-07 13:44:07.831903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.872 [2024-11-07 13:44:07.831916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.872 [2024-11-07 13:44:07.832155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.872 [2024-11-07 13:44:07.832390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.872 [2024-11-07 13:44:07.832402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.872 [2024-11-07 13:44:07.832413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.872 [2024-11-07 13:44:07.832423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:59.872 [2024-11-07 13:44:07.845517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.872 [2024-11-07 13:44:07.846102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.872 [2024-11-07 13:44:07.846151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.872 [2024-11-07 13:44:07.846171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.872 [2024-11-07 13:44:07.846438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.872 [2024-11-07 13:44:07.846676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.872 [2024-11-07 13:44:07.846690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.872 [2024-11-07 13:44:07.846701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.872 [2024-11-07 13:44:07.846712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.872 [2024-11-07 13:44:07.859579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.872 [2024-11-07 13:44:07.860191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.872 [2024-11-07 13:44:07.860217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:38:59.872 [2024-11-07 13:44:07.860229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:38:59.872 [2024-11-07 13:44:07.860464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:38:59.872 [2024-11-07 13:44:07.860699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.872 [2024-11-07 13:44:07.860712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.872 [2024-11-07 13:44:07.860722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.872 [2024-11-07 13:44:07.860732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.134 [2024-11-07 13:44:07.873574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.134 [2024-11-07 13:44:07.874169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.134 [2024-11-07 13:44:07.874193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.134 [2024-11-07 13:44:07.874205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.134 [2024-11-07 13:44:07.874438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.134 [2024-11-07 13:44:07.874671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.134 [2024-11-07 13:44:07.874684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.134 [2024-11-07 13:44:07.874694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.134 [2024-11-07 13:44:07.874704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.134 [2024-11-07 13:44:07.887541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.134 [2024-11-07 13:44:07.888232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.134 [2024-11-07 13:44:07.888279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.134 [2024-11-07 13:44:07.888295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.134 [2024-11-07 13:44:07.888561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.134 [2024-11-07 13:44:07.888805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.134 [2024-11-07 13:44:07.888820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.134 [2024-11-07 13:44:07.888831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.134 [2024-11-07 13:44:07.888842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.134 [2024-11-07 13:44:07.901696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.134 [2024-11-07 13:44:07.902282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.134 [2024-11-07 13:44:07.902308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.134 [2024-11-07 13:44:07.902320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.134 [2024-11-07 13:44:07.902554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.134 [2024-11-07 13:44:07.902788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.134 [2024-11-07 13:44:07.902800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.134 [2024-11-07 13:44:07.902810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.134 [2024-11-07 13:44:07.902820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.134 [2024-11-07 13:44:07.915654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.134 [2024-11-07 13:44:07.916341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.134 [2024-11-07 13:44:07.916388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.134 [2024-11-07 13:44:07.916404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.134 [2024-11-07 13:44:07.916671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.134 [2024-11-07 13:44:07.916920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.134 [2024-11-07 13:44:07.916935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.134 [2024-11-07 13:44:07.916946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.134 [2024-11-07 13:44:07.916958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.134 [2024-11-07 13:44:07.929804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.134 [2024-11-07 13:44:07.930497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.134 [2024-11-07 13:44:07.930545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.134 [2024-11-07 13:44:07.930561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.134 [2024-11-07 13:44:07.930827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.134 [2024-11-07 13:44:07.931076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.134 [2024-11-07 13:44:07.931092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.134 [2024-11-07 13:44:07.931108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.134 [2024-11-07 13:44:07.931126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.134 [2024-11-07 13:44:07.943995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.134 [2024-11-07 13:44:07.944599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.134 [2024-11-07 13:44:07.944624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.134 [2024-11-07 13:44:07.944636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.134 [2024-11-07 13:44:07.944878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.134 [2024-11-07 13:44:07.945113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.134 [2024-11-07 13:44:07.945125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.134 [2024-11-07 13:44:07.945136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.134 [2024-11-07 13:44:07.945145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.134 [2024-11-07 13:44:07.957992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.134 [2024-11-07 13:44:07.958585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.134 [2024-11-07 13:44:07.958608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.134 [2024-11-07 13:44:07.958620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.134 [2024-11-07 13:44:07.958853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.134 [2024-11-07 13:44:07.959093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.134 [2024-11-07 13:44:07.959108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.134 [2024-11-07 13:44:07.959118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.134 [2024-11-07 13:44:07.959127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.134 [2024-11-07 13:44:07.971961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.134 [2024-11-07 13:44:07.972431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.134 [2024-11-07 13:44:07.972453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.134 [2024-11-07 13:44:07.972465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.134 [2024-11-07 13:44:07.972699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.134 [2024-11-07 13:44:07.972940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.135 [2024-11-07 13:44:07.972953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.135 [2024-11-07 13:44:07.972963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.135 [2024-11-07 13:44:07.972976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.135 [2024-11-07 13:44:07.986029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.135 [2024-11-07 13:44:07.986578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.135 [2024-11-07 13:44:07.986602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.135 [2024-11-07 13:44:07.986613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.135 [2024-11-07 13:44:07.986846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.135 [2024-11-07 13:44:07.987085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.135 [2024-11-07 13:44:07.987099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.135 [2024-11-07 13:44:07.987108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.135 [2024-11-07 13:44:07.987118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.135 [2024-11-07 13:44:08.000173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.135 [2024-11-07 13:44:08.000718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.135 [2024-11-07 13:44:08.000741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.135 [2024-11-07 13:44:08.000753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.135 [2024-11-07 13:44:08.000992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.135 [2024-11-07 13:44:08.001227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.135 [2024-11-07 13:44:08.001240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.135 [2024-11-07 13:44:08.001251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.135 [2024-11-07 13:44:08.001261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.135 [2024-11-07 13:44:08.014317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.135 [2024-11-07 13:44:08.015089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.135 [2024-11-07 13:44:08.015137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.135 [2024-11-07 13:44:08.015154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.135 [2024-11-07 13:44:08.015421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.135 [2024-11-07 13:44:08.015660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.135 [2024-11-07 13:44:08.015675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.135 [2024-11-07 13:44:08.015687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.135 [2024-11-07 13:44:08.015698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.135 [2024-11-07 13:44:08.028413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.135 [2024-11-07 13:44:08.029123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.135 [2024-11-07 13:44:08.029170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.135 [2024-11-07 13:44:08.029186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.135 [2024-11-07 13:44:08.029452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.135 [2024-11-07 13:44:08.029690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.135 [2024-11-07 13:44:08.029705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.135 [2024-11-07 13:44:08.029717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.135 [2024-11-07 13:44:08.029729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.135 [2024-11-07 13:44:08.042607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.135 [2024-11-07 13:44:08.043313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.135 [2024-11-07 13:44:08.043361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.135 [2024-11-07 13:44:08.043379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.135 [2024-11-07 13:44:08.043646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.135 [2024-11-07 13:44:08.043896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.135 [2024-11-07 13:44:08.043911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.135 [2024-11-07 13:44:08.043922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.135 [2024-11-07 13:44:08.043934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.135 [2024-11-07 13:44:08.056574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.135 [2024-11-07 13:44:08.057155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.135 [2024-11-07 13:44:08.057181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.135 [2024-11-07 13:44:08.057193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.135 [2024-11-07 13:44:08.057428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.135 [2024-11-07 13:44:08.057662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.135 [2024-11-07 13:44:08.057674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.135 [2024-11-07 13:44:08.057685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.135 [2024-11-07 13:44:08.057694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.135 [2024-11-07 13:44:08.070749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.135 [2024-11-07 13:44:08.071436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.135 [2024-11-07 13:44:08.071484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.135 [2024-11-07 13:44:08.071504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.135 [2024-11-07 13:44:08.071771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.135 [2024-11-07 13:44:08.072019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.135 [2024-11-07 13:44:08.072034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.135 [2024-11-07 13:44:08.072045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.135 [2024-11-07 13:44:08.072056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.135 [2024-11-07 13:44:08.084909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.135 [2024-11-07 13:44:08.085611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.135 [2024-11-07 13:44:08.085659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.135 [2024-11-07 13:44:08.085674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.135 [2024-11-07 13:44:08.085951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.135 [2024-11-07 13:44:08.086190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.135 [2024-11-07 13:44:08.086204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.135 [2024-11-07 13:44:08.086215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.135 [2024-11-07 13:44:08.086227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.135 [2024-11-07 13:44:08.099051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.135 [2024-11-07 13:44:08.099765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.135 [2024-11-07 13:44:08.099812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.136 [2024-11-07 13:44:08.099828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.136 [2024-11-07 13:44:08.100104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.136 [2024-11-07 13:44:08.100344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.136 [2024-11-07 13:44:08.100358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.136 [2024-11-07 13:44:08.100369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.136 [2024-11-07 13:44:08.100381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.136 [2024-11-07 13:44:08.113235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.136 [2024-11-07 13:44:08.113963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.136 [2024-11-07 13:44:08.114011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.136 [2024-11-07 13:44:08.114027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.136 [2024-11-07 13:44:08.114293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.136 [2024-11-07 13:44:08.114536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.136 [2024-11-07 13:44:08.114550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.136 [2024-11-07 13:44:08.114561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.136 [2024-11-07 13:44:08.114572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.136 [2024-11-07 13:44:08.127213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.136 [2024-11-07 13:44:08.127930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.136 [2024-11-07 13:44:08.127978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.136 [2024-11-07 13:44:08.127995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.136 [2024-11-07 13:44:08.128262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.136 [2024-11-07 13:44:08.128501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.136 [2024-11-07 13:44:08.128517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.136 [2024-11-07 13:44:08.128529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.136 [2024-11-07 13:44:08.128542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.398 [2024-11-07 13:44:08.141201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.398 [2024-11-07 13:44:08.141950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.398 [2024-11-07 13:44:08.141998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.398 [2024-11-07 13:44:08.142015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.398 [2024-11-07 13:44:08.142283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.398 [2024-11-07 13:44:08.142538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.398 [2024-11-07 13:44:08.142554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.398 [2024-11-07 13:44:08.142565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.398 [2024-11-07 13:44:08.142576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.398 [2024-11-07 13:44:08.155232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.398 [2024-11-07 13:44:08.155838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.398 [2024-11-07 13:44:08.155870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.398 [2024-11-07 13:44:08.155883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.398 [2024-11-07 13:44:08.156117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.398 [2024-11-07 13:44:08.156351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.398 [2024-11-07 13:44:08.156364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.398 [2024-11-07 13:44:08.156379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.398 [2024-11-07 13:44:08.156389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.399 [2024-11-07 13:44:08.169237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.399 [2024-11-07 13:44:08.169666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.399 [2024-11-07 13:44:08.169691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.399 [2024-11-07 13:44:08.169703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.399 [2024-11-07 13:44:08.169943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.399 [2024-11-07 13:44:08.170177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.399 [2024-11-07 13:44:08.170190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.399 [2024-11-07 13:44:08.170200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.399 [2024-11-07 13:44:08.170210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.399 [2024-11-07 13:44:08.183272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.399 [2024-11-07 13:44:08.183820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.399 [2024-11-07 13:44:08.183844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.399 [2024-11-07 13:44:08.183855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.399 [2024-11-07 13:44:08.184094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.399 [2024-11-07 13:44:08.184327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.399 [2024-11-07 13:44:08.184339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.399 [2024-11-07 13:44:08.184349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.399 [2024-11-07 13:44:08.184359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.399 [2024-11-07 13:44:08.197421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.399 [2024-11-07 13:44:08.198114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.399 [2024-11-07 13:44:08.198161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.399 [2024-11-07 13:44:08.198177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.399 [2024-11-07 13:44:08.198444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.399 [2024-11-07 13:44:08.198682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.399 [2024-11-07 13:44:08.198696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.399 [2024-11-07 13:44:08.198707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.399 [2024-11-07 13:44:08.198719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.399 [2024-11-07 13:44:08.211577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.399 [2024-11-07 13:44:08.212208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.399 [2024-11-07 13:44:08.212256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.399 [2024-11-07 13:44:08.212272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.399 [2024-11-07 13:44:08.212538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.399 [2024-11-07 13:44:08.212777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.399 [2024-11-07 13:44:08.212791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.399 [2024-11-07 13:44:08.212802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.399 [2024-11-07 13:44:08.212813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.399 [2024-11-07 13:44:08.225764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.399 [2024-11-07 13:44:08.226448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.399 [2024-11-07 13:44:08.226496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.399 [2024-11-07 13:44:08.226513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.399 [2024-11-07 13:44:08.226780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.399 [2024-11-07 13:44:08.227027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.399 [2024-11-07 13:44:08.227042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.399 [2024-11-07 13:44:08.227054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.399 [2024-11-07 13:44:08.227066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.399 [2024-11-07 13:44:08.239934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.399 [2024-11-07 13:44:08.240622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.399 [2024-11-07 13:44:08.240670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.399 [2024-11-07 13:44:08.240685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.399 [2024-11-07 13:44:08.240961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.399 [2024-11-07 13:44:08.241201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.399 [2024-11-07 13:44:08.241216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.399 [2024-11-07 13:44:08.241227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.399 [2024-11-07 13:44:08.241239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.399 [2024-11-07 13:44:08.253922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.399 [2024-11-07 13:44:08.254530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.399 [2024-11-07 13:44:08.254560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.399 [2024-11-07 13:44:08.254572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.399 [2024-11-07 13:44:08.254807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.399 [2024-11-07 13:44:08.255048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.399 [2024-11-07 13:44:08.255063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.399 [2024-11-07 13:44:08.255073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.399 [2024-11-07 13:44:08.255083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.399 [2024-11-07 13:44:08.267932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.399 [2024-11-07 13:44:08.268479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.399 [2024-11-07 13:44:08.268502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.399 [2024-11-07 13:44:08.268514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.399 [2024-11-07 13:44:08.268747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.399 [2024-11-07 13:44:08.268987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.399 [2024-11-07 13:44:08.269002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.399 [2024-11-07 13:44:08.269014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.399 [2024-11-07 13:44:08.269024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.399 [2024-11-07 13:44:08.282086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.399 [2024-11-07 13:44:08.282751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.399 [2024-11-07 13:44:08.282798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.399 [2024-11-07 13:44:08.282814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.399 [2024-11-07 13:44:08.283089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.399 [2024-11-07 13:44:08.283328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.399 [2024-11-07 13:44:08.283344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.400 [2024-11-07 13:44:08.283356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.400 [2024-11-07 13:44:08.283367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.400 [2024-11-07 13:44:08.296221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.400 [2024-11-07 13:44:08.296944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.400 [2024-11-07 13:44:08.296992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.400 [2024-11-07 13:44:08.297008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.400 [2024-11-07 13:44:08.297279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.400 [2024-11-07 13:44:08.297519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.400 [2024-11-07 13:44:08.297533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.400 [2024-11-07 13:44:08.297544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.400 [2024-11-07 13:44:08.297556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.400 [2024-11-07 13:44:08.310190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.400 [2024-11-07 13:44:08.310764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.400 [2024-11-07 13:44:08.310789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.400 [2024-11-07 13:44:08.310801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.400 [2024-11-07 13:44:08.311040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.400 [2024-11-07 13:44:08.311275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.400 [2024-11-07 13:44:08.311288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.400 [2024-11-07 13:44:08.311298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.400 [2024-11-07 13:44:08.311308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.400 [2024-11-07 13:44:08.324368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.400 [2024-11-07 13:44:08.324966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.400 [2024-11-07 13:44:08.325014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.400 [2024-11-07 13:44:08.325032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.400 [2024-11-07 13:44:08.325300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.400 [2024-11-07 13:44:08.325538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.400 [2024-11-07 13:44:08.325552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.400 [2024-11-07 13:44:08.325563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.400 [2024-11-07 13:44:08.325575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.400 [2024-11-07 13:44:08.338443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.400 [2024-11-07 13:44:08.339170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.400 [2024-11-07 13:44:08.339217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.400 [2024-11-07 13:44:08.339242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.400 [2024-11-07 13:44:08.339508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.400 [2024-11-07 13:44:08.339747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.400 [2024-11-07 13:44:08.339765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.400 [2024-11-07 13:44:08.339776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.400 [2024-11-07 13:44:08.339787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.400 [2024-11-07 13:44:08.352469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.400 [2024-11-07 13:44:08.353214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.400 [2024-11-07 13:44:08.353262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.400 [2024-11-07 13:44:08.353278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.400 [2024-11-07 13:44:08.353544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.400 [2024-11-07 13:44:08.353782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.400 [2024-11-07 13:44:08.353797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.400 [2024-11-07 13:44:08.353808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.400 [2024-11-07 13:44:08.353819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.400 [2024-11-07 13:44:08.366467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.400 [2024-11-07 13:44:08.367076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.400 [2024-11-07 13:44:08.367103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.400 [2024-11-07 13:44:08.367114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.400 [2024-11-07 13:44:08.367349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.400 [2024-11-07 13:44:08.367583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.400 [2024-11-07 13:44:08.367597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.400 [2024-11-07 13:44:08.367607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.400 [2024-11-07 13:44:08.367617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.400 [2024-11-07 13:44:08.380470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.400 [2024-11-07 13:44:08.381028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.400 [2024-11-07 13:44:08.381052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.400 [2024-11-07 13:44:08.381064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.400 [2024-11-07 13:44:08.381297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.400 [2024-11-07 13:44:08.381531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.400 [2024-11-07 13:44:08.381543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.400 [2024-11-07 13:44:08.381553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.400 [2024-11-07 13:44:08.381567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.400 [2024-11-07 13:44:08.394647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.400 [2024-11-07 13:44:08.395132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.400 [2024-11-07 13:44:08.395156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.400 [2024-11-07 13:44:08.395167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.400 [2024-11-07 13:44:08.395400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.400 [2024-11-07 13:44:08.395633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.400 [2024-11-07 13:44:08.395646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.400 [2024-11-07 13:44:08.395656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.400 [2024-11-07 13:44:08.395665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.662 [2024-11-07 13:44:08.408741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.662 [2024-11-07 13:44:08.409331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.662 [2024-11-07 13:44:08.409355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.662 [2024-11-07 13:44:08.409366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.662 [2024-11-07 13:44:08.409599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.662 [2024-11-07 13:44:08.409833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.662 [2024-11-07 13:44:08.409846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.662 [2024-11-07 13:44:08.409857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.662 [2024-11-07 13:44:08.409874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.662 [2024-11-07 13:44:08.422729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.662 [2024-11-07 13:44:08.423418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.662 [2024-11-07 13:44:08.423466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.662 [2024-11-07 13:44:08.423482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.662 [2024-11-07 13:44:08.423748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.662 [2024-11-07 13:44:08.423998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.662 [2024-11-07 13:44:08.424014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.662 [2024-11-07 13:44:08.424025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.662 [2024-11-07 13:44:08.424037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.662 [2024-11-07 13:44:08.436906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.662 [2024-11-07 13:44:08.437484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.662 [2024-11-07 13:44:08.437509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.662 [2024-11-07 13:44:08.437521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.662 [2024-11-07 13:44:08.437755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.662 [2024-11-07 13:44:08.437997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.662 [2024-11-07 13:44:08.438011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.662 [2024-11-07 13:44:08.438021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.662 [2024-11-07 13:44:08.438031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.662 [2024-11-07 13:44:08.450932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.662 [2024-11-07 13:44:08.451481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.662 [2024-11-07 13:44:08.451504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.662 [2024-11-07 13:44:08.451515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.662 [2024-11-07 13:44:08.451748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.662 [2024-11-07 13:44:08.451989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.662 [2024-11-07 13:44:08.452003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.662 [2024-11-07 13:44:08.452013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.662 [2024-11-07 13:44:08.452023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.662 [2024-11-07 13:44:08.465105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.662 [2024-11-07 13:44:08.465700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.662 [2024-11-07 13:44:08.465724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.662 [2024-11-07 13:44:08.465735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.662 [2024-11-07 13:44:08.465976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.662 [2024-11-07 13:44:08.466211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.662 [2024-11-07 13:44:08.466223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.662 [2024-11-07 13:44:08.466232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.662 [2024-11-07 13:44:08.466242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.662 [2024-11-07 13:44:08.479100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.662 [2024-11-07 13:44:08.479642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.663 [2024-11-07 13:44:08.479665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.663 [2024-11-07 13:44:08.479680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.663 [2024-11-07 13:44:08.479921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.663 [2024-11-07 13:44:08.480155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.663 [2024-11-07 13:44:08.480169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.663 [2024-11-07 13:44:08.480179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.663 [2024-11-07 13:44:08.480188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.663 [2024-11-07 13:44:08.493263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.663 [2024-11-07 13:44:08.493802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.663 [2024-11-07 13:44:08.493825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.663 [2024-11-07 13:44:08.493836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.663 [2024-11-07 13:44:08.494076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.663 [2024-11-07 13:44:08.494310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.663 [2024-11-07 13:44:08.494322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.663 [2024-11-07 13:44:08.494332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.663 [2024-11-07 13:44:08.494341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.663 [2024-11-07 13:44:08.507422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.663 [2024-11-07 13:44:08.507997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.663 [2024-11-07 13:44:08.508020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.663 [2024-11-07 13:44:08.508032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.663 [2024-11-07 13:44:08.508264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.663 [2024-11-07 13:44:08.508498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.663 [2024-11-07 13:44:08.508511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.663 [2024-11-07 13:44:08.508520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.663 [2024-11-07 13:44:08.508530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.663 [2024-11-07 13:44:08.521399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.663 [2024-11-07 13:44:08.521964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.663 [2024-11-07 13:44:08.521988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.663 [2024-11-07 13:44:08.521999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.663 [2024-11-07 13:44:08.522236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.663 [2024-11-07 13:44:08.522469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.663 [2024-11-07 13:44:08.522484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.663 [2024-11-07 13:44:08.522494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.663 [2024-11-07 13:44:08.522506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.663 [2024-11-07 13:44:08.535365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.663 [2024-11-07 13:44:08.535955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.663 [2024-11-07 13:44:08.535978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.663 [2024-11-07 13:44:08.535989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.663 [2024-11-07 13:44:08.536222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.663 [2024-11-07 13:44:08.536462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.663 [2024-11-07 13:44:08.536475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.663 [2024-11-07 13:44:08.536484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.663 [2024-11-07 13:44:08.536494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.663 [2024-11-07 13:44:08.549375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.663 [2024-11-07 13:44:08.549982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.663 [2024-11-07 13:44:08.550006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.663 [2024-11-07 13:44:08.550016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.663 [2024-11-07 13:44:08.550249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.663 [2024-11-07 13:44:08.550482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.663 [2024-11-07 13:44:08.550496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.663 [2024-11-07 13:44:08.550505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.663 [2024-11-07 13:44:08.550515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.663 [2024-11-07 13:44:08.563375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.663 [2024-11-07 13:44:08.564066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.663 [2024-11-07 13:44:08.564114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.663 [2024-11-07 13:44:08.564130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.663 [2024-11-07 13:44:08.564396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.663 [2024-11-07 13:44:08.564635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.663 [2024-11-07 13:44:08.564654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.663 [2024-11-07 13:44:08.564665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.663 [2024-11-07 13:44:08.564677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.663 [2024-11-07 13:44:08.577555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.663 [2024-11-07 13:44:08.578121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.663 [2024-11-07 13:44:08.578148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.663 [2024-11-07 13:44:08.578159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.663 [2024-11-07 13:44:08.578393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.663 [2024-11-07 13:44:08.578628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.663 [2024-11-07 13:44:08.578641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.663 [2024-11-07 13:44:08.578651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.663 [2024-11-07 13:44:08.578661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.663 [2024-11-07 13:44:08.591523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.663 [2024-11-07 13:44:08.592192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.663 [2024-11-07 13:44:08.592240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.663 [2024-11-07 13:44:08.592255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.663 [2024-11-07 13:44:08.592522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.663 [2024-11-07 13:44:08.592760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.663 [2024-11-07 13:44:08.592774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.663 [2024-11-07 13:44:08.592785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.663 [2024-11-07 13:44:08.592797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.663 [2024-11-07 13:44:08.605657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.663 [2024-11-07 13:44:08.606297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.663 [2024-11-07 13:44:08.606345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.663 [2024-11-07 13:44:08.606361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.663 [2024-11-07 13:44:08.606627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.663 [2024-11-07 13:44:08.606875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.663 [2024-11-07 13:44:08.606890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.663 [2024-11-07 13:44:08.606901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.663 [2024-11-07 13:44:08.606917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.663 [2024-11-07 13:44:08.619783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.664 [2024-11-07 13:44:08.620367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.664 [2024-11-07 13:44:08.620393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.664 [2024-11-07 13:44:08.620405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.664 [2024-11-07 13:44:08.620639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.664 [2024-11-07 13:44:08.620882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.664 [2024-11-07 13:44:08.620896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.664 [2024-11-07 13:44:08.620906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.664 [2024-11-07 13:44:08.620916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.664 [2024-11-07 13:44:08.633767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.664 [2024-11-07 13:44:08.634328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.664 [2024-11-07 13:44:08.634353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.664 [2024-11-07 13:44:08.634364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.664 [2024-11-07 13:44:08.634596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.664 [2024-11-07 13:44:08.634830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.664 [2024-11-07 13:44:08.634843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.664 [2024-11-07 13:44:08.634853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.664 [2024-11-07 13:44:08.634870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.664 [2024-11-07 13:44:08.647966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.664 [2024-11-07 13:44:08.648511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.664 [2024-11-07 13:44:08.648534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.664 [2024-11-07 13:44:08.648545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.664 [2024-11-07 13:44:08.648778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.664 [2024-11-07 13:44:08.649018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.664 [2024-11-07 13:44:08.649031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.664 [2024-11-07 13:44:08.649042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.664 [2024-11-07 13:44:08.649051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.664 [2024-11-07 13:44:08.662128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.664 [2024-11-07 13:44:08.662735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.664 [2024-11-07 13:44:08.662759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.664 [2024-11-07 13:44:08.662770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.664 [2024-11-07 13:44:08.663009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.664 [2024-11-07 13:44:08.663244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.664 [2024-11-07 13:44:08.663257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.664 [2024-11-07 13:44:08.663267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.664 [2024-11-07 13:44:08.663276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.925 [2024-11-07 13:44:08.676191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.925 [2024-11-07 13:44:08.676782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.925 [2024-11-07 13:44:08.676805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.925 [2024-11-07 13:44:08.676816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.925 [2024-11-07 13:44:08.677056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.925 [2024-11-07 13:44:08.677291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.925 [2024-11-07 13:44:08.677304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.925 [2024-11-07 13:44:08.677314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.925 [2024-11-07 13:44:08.677323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.925 4553.40 IOPS, 17.79 MiB/s [2024-11-07T12:44:08.932Z] [2024-11-07 13:44:08.690177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.925 [2024-11-07 13:44:08.690772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.925 [2024-11-07 13:44:08.690795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.925 [2024-11-07 13:44:08.690807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.925 [2024-11-07 13:44:08.691047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.925 [2024-11-07 13:44:08.691281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.925 [2024-11-07 13:44:08.691294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.925 [2024-11-07 13:44:08.691304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.925 [2024-11-07 13:44:08.691313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
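Interleaved with the error records above is one periodic performance sample, 4553.40 IOPS at 17.79 MiB/s. The two figures are mutually consistent with a 4 KiB I/O size (an assumption; the block size is not stated in this excerpt), as this sketch checks:

/* Sanity-check "4553.40 IOPS, 17.79 MiB/s" assuming 4 KiB I/Os
 * (assumed, not stated in the log). */
#include <stdio.h>

int main(void)
{
    double iops = 4553.40;
    double io_size = 4096.0;                       /* assumed 4 KiB per I/O */
    double mib_s = iops * io_size / (1024.0 * 1024.0);

    printf("%.2f MiB/s\n", mib_s);                 /* prints 17.79 MiB/s */
    return 0;
}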
00:39:00.925 [2024-11-07 13:44:08.704179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.925 [2024-11-07 13:44:08.704767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.925 [2024-11-07 13:44:08.704790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.925 [2024-11-07 13:44:08.704805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.925 [2024-11-07 13:44:08.705043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.925 [2024-11-07 13:44:08.705276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.926 [2024-11-07 13:44:08.705289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.926 [2024-11-07 13:44:08.705298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.926 [2024-11-07 13:44:08.705308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.926 [2024-11-07 13:44:08.718168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.926 [2024-11-07 13:44:08.718747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.926 [2024-11-07 13:44:08.718770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.926 [2024-11-07 13:44:08.718781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.926 [2024-11-07 13:44:08.719020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.926 [2024-11-07 13:44:08.719254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.926 [2024-11-07 13:44:08.719267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.926 [2024-11-07 13:44:08.719277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.926 [2024-11-07 13:44:08.719286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.926 [2024-11-07 13:44:08.732133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.926 [2024-11-07 13:44:08.732691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.926 [2024-11-07 13:44:08.732714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.926 [2024-11-07 13:44:08.732726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.926 [2024-11-07 13:44:08.732964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.926 [2024-11-07 13:44:08.733198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.926 [2024-11-07 13:44:08.733212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.926 [2024-11-07 13:44:08.733228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.926 [2024-11-07 13:44:08.733237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.926 [2024-11-07 13:44:08.746313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.926 [2024-11-07 13:44:08.746897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.926 [2024-11-07 13:44:08.746921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.926 [2024-11-07 13:44:08.746932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.926 [2024-11-07 13:44:08.747165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.926 [2024-11-07 13:44:08.747401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.926 [2024-11-07 13:44:08.747415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.926 [2024-11-07 13:44:08.747425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.926 [2024-11-07 13:44:08.747434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.926 [2024-11-07 13:44:08.760294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.926 [2024-11-07 13:44:08.760885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.926 [2024-11-07 13:44:08.760933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.926 [2024-11-07 13:44:08.760951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.926 [2024-11-07 13:44:08.761219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.926 [2024-11-07 13:44:08.761457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.926 [2024-11-07 13:44:08.761471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.926 [2024-11-07 13:44:08.761484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.926 [2024-11-07 13:44:08.761496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.926 [2024-11-07 13:44:08.774356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.926 [2024-11-07 13:44:08.775020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.926 [2024-11-07 13:44:08.775067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.926 [2024-11-07 13:44:08.775085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.926 [2024-11-07 13:44:08.775352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.926 [2024-11-07 13:44:08.775590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.926 [2024-11-07 13:44:08.775604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.926 [2024-11-07 13:44:08.775615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.926 [2024-11-07 13:44:08.775627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.926 [2024-11-07 13:44:08.788490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.926 [2024-11-07 13:44:08.789191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.926 [2024-11-07 13:44:08.789239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.926 [2024-11-07 13:44:08.789255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.926 [2024-11-07 13:44:08.789521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.926 [2024-11-07 13:44:08.789760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.926 [2024-11-07 13:44:08.789775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.926 [2024-11-07 13:44:08.789790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.926 [2024-11-07 13:44:08.789802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.926 [2024-11-07 13:44:08.802686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.926 [2024-11-07 13:44:08.803390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.926 [2024-11-07 13:44:08.803443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.926 [2024-11-07 13:44:08.803459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.926 [2024-11-07 13:44:08.803726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.926 [2024-11-07 13:44:08.803974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.926 [2024-11-07 13:44:08.803990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.926 [2024-11-07 13:44:08.804001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.926 [2024-11-07 13:44:08.804012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.926 [2024-11-07 13:44:08.816668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.926 [2024-11-07 13:44:08.817245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.926 [2024-11-07 13:44:08.817270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.926 [2024-11-07 13:44:08.817282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.926 [2024-11-07 13:44:08.817516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.926 [2024-11-07 13:44:08.817750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.926 [2024-11-07 13:44:08.817762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.926 [2024-11-07 13:44:08.817772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.926 [2024-11-07 13:44:08.817782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.926 [2024-11-07 13:44:08.830648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.926 [2024-11-07 13:44:08.831341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.926 [2024-11-07 13:44:08.831388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.926 [2024-11-07 13:44:08.831403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.926 [2024-11-07 13:44:08.831670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.926 [2024-11-07 13:44:08.831918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.926 [2024-11-07 13:44:08.831933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.926 [2024-11-07 13:44:08.831945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.926 [2024-11-07 13:44:08.831957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.926 [2024-11-07 13:44:08.844852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.926 [2024-11-07 13:44:08.845455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.927 [2024-11-07 13:44:08.845504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.927 [2024-11-07 13:44:08.845521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.927 [2024-11-07 13:44:08.845787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.927 [2024-11-07 13:44:08.846035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.927 [2024-11-07 13:44:08.846050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.927 [2024-11-07 13:44:08.846062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.927 [2024-11-07 13:44:08.846074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.927 [2024-11-07 13:44:08.858954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.927 [2024-11-07 13:44:08.859517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.927 [2024-11-07 13:44:08.859542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.927 [2024-11-07 13:44:08.859554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.927 [2024-11-07 13:44:08.859788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.927 [2024-11-07 13:44:08.860030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.927 [2024-11-07 13:44:08.860043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.927 [2024-11-07 13:44:08.860053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.927 [2024-11-07 13:44:08.860064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.927 [2024-11-07 13:44:08.872922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.927 [2024-11-07 13:44:08.873467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.927 [2024-11-07 13:44:08.873491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.927 [2024-11-07 13:44:08.873501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.927 [2024-11-07 13:44:08.873734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.927 [2024-11-07 13:44:08.873978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.927 [2024-11-07 13:44:08.873992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.927 [2024-11-07 13:44:08.874002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.927 [2024-11-07 13:44:08.874011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.927 [2024-11-07 13:44:08.887085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.927 [2024-11-07 13:44:08.887569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.927 [2024-11-07 13:44:08.887592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.927 [2024-11-07 13:44:08.887603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.927 [2024-11-07 13:44:08.887836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.927 [2024-11-07 13:44:08.888077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.927 [2024-11-07 13:44:08.888090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.927 [2024-11-07 13:44:08.888100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.927 [2024-11-07 13:44:08.888110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.927 [2024-11-07 13:44:08.901190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.927 [2024-11-07 13:44:08.901737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.927 [2024-11-07 13:44:08.901759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.927 [2024-11-07 13:44:08.901770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.927 [2024-11-07 13:44:08.902009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.927 [2024-11-07 13:44:08.902243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.927 [2024-11-07 13:44:08.902257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.927 [2024-11-07 13:44:08.902267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.927 [2024-11-07 13:44:08.902277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:00.927 [2024-11-07 13:44:08.915335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.927 [2024-11-07 13:44:08.916033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.927 [2024-11-07 13:44:08.916081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:00.927 [2024-11-07 13:44:08.916097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:00.927 [2024-11-07 13:44:08.916363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:00.927 [2024-11-07 13:44:08.916602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.927 [2024-11-07 13:44:08.916616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.927 [2024-11-07 13:44:08.916626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.927 [2024-11-07 13:44:08.916638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.189 [2024-11-07 13:44:08.929511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.189 [2024-11-07 13:44:08.930097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.189 [2024-11-07 13:44:08.930144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.189 [2024-11-07 13:44:08.930161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.189 [2024-11-07 13:44:08.930432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.189 [2024-11-07 13:44:08.930671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.189 [2024-11-07 13:44:08.930685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.189 [2024-11-07 13:44:08.930696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.189 [2024-11-07 13:44:08.930708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.189 [2024-11-07 13:44:08.943587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.189 [2024-11-07 13:44:08.944271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.189 [2024-11-07 13:44:08.944319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.189 [2024-11-07 13:44:08.944335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.189 [2024-11-07 13:44:08.944601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.189 [2024-11-07 13:44:08.944839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.189 [2024-11-07 13:44:08.944854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.189 [2024-11-07 13:44:08.944875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.189 [2024-11-07 13:44:08.944887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4139007 Killed "${NVMF_APP[@]}" "$@"
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:39:01.189 [2024-11-07 13:44:08.957564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.189 [2024-11-07 13:44:08.958228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.189 [2024-11-07 13:44:08.958275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420
00:39:01.189 [2024-11-07 13:44:08.958291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set
00:39:01.189 [2024-11-07 13:44:08.958557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4140857
00:39:01.189 [2024-11-07 13:44:08.958795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.189 [2024-11-07 13:44:08.958810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.189 [2024-11-07 13:44:08.958821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.189 [2024-11-07 13:44:08.958834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4140857
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 4140857 ']'
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
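(Editor's note) The xtrace lines interleaved with the error stream above show the recovery step: bdevperf.sh's tgt_init and nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk namespace (old pid 4139007 was SIGKILLed, new pid is 4140857), and waitforlisten then polls until the new process answers on /var/tmp/spdk.sock, with max_retries=100. A hypothetical C sketch of that poll-until-listening pattern; SPDK's actual waitforlisten is a shell function in autotest_common.sh, the helper name and retry delay below are invented for illustration:

/* Hypothetical "waitforlisten"-style helper (not SPDK's implementation):
 * poll a UNIX-domain socket until a server accepts, or give up. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { 0 };

    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;          /* server is up and accepting connections */
        }
        close(fd);
        usleep(100 * 1000);    /* retry delay chosen for illustration */
    }
    return -1;                 /* gave up: process never started listening */
}

int main(void)
{
    /* Mirrors the shell loop above: rpc_addr=/var/tmp/spdk.sock, max_retries=100. */
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}

Until this wait succeeds and the new target re-creates its TCP listener, the host-side errno 111 cycle keeps repeating.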
00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:01.189 13:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:01.189 [2024-11-07 13:44:08.971708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.189 [2024-11-07 13:44:08.972249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.189 [2024-11-07 13:44:08.972274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.189 [2024-11-07 13:44:08.972287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.189 [2024-11-07 13:44:08.972522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.189 [2024-11-07 13:44:08.972756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.189 [2024-11-07 13:44:08.972767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.189 [2024-11-07 13:44:08.972778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.189 [2024-11-07 13:44:08.972788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.189 [2024-11-07 13:44:08.985871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.189 [2024-11-07 13:44:08.986320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.189 [2024-11-07 13:44:08.986342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.189 [2024-11-07 13:44:08.986353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.189 [2024-11-07 13:44:08.986586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.189 [2024-11-07 13:44:08.986820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.189 [2024-11-07 13:44:08.986832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.189 [2024-11-07 13:44:08.986843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.189 [2024-11-07 13:44:08.986853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.189 [2024-11-07 13:44:08.999944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.189 [2024-11-07 13:44:09.000602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.189 [2024-11-07 13:44:09.000648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.189 [2024-11-07 13:44:09.000664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.189 [2024-11-07 13:44:09.000943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.189 [2024-11-07 13:44:09.001183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.189 [2024-11-07 13:44:09.001197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.189 [2024-11-07 13:44:09.001208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.189 [2024-11-07 13:44:09.001219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.189 [2024-11-07 13:44:09.014085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.189 [2024-11-07 13:44:09.014776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.189 [2024-11-07 13:44:09.014823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.189 [2024-11-07 13:44:09.014839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.189 [2024-11-07 13:44:09.015115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.189 [2024-11-07 13:44:09.015355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.189 [2024-11-07 13:44:09.015369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.189 [2024-11-07 13:44:09.015380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.189 [2024-11-07 13:44:09.015392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.190 [2024-11-07 13:44:09.028267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.190 [2024-11-07 13:44:09.028886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.190 [2024-11-07 13:44:09.028933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.190 [2024-11-07 13:44:09.028949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.190 [2024-11-07 13:44:09.029216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.190 [2024-11-07 13:44:09.029454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.190 [2024-11-07 13:44:09.029467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.190 [2024-11-07 13:44:09.029479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.190 [2024-11-07 13:44:09.029490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.190 [2024-11-07 13:44:09.042364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.190 [2024-11-07 13:44:09.042825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.190 [2024-11-07 13:44:09.042850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.190 [2024-11-07 13:44:09.042868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.190 [2024-11-07 13:44:09.043103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.190 [2024-11-07 13:44:09.043336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.190 [2024-11-07 13:44:09.043353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.190 [2024-11-07 13:44:09.043364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.190 [2024-11-07 13:44:09.043374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.190 [2024-11-07 13:44:09.043612] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:39:01.190 [2024-11-07 13:44:09.043697] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:01.190 [2024-11-07 13:44:09.056348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.190 [2024-11-07 13:44:09.056797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.190 [2024-11-07 13:44:09.056822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.190 [2024-11-07 13:44:09.056834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.190 [2024-11-07 13:44:09.057075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.190 [2024-11-07 13:44:09.057310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.190 [2024-11-07 13:44:09.057321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.190 [2024-11-07 13:44:09.057332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.190 [2024-11-07 13:44:09.057341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.190 [2024-11-07 13:44:09.070412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.190 [2024-11-07 13:44:09.070992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.190 [2024-11-07 13:44:09.071017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.190 [2024-11-07 13:44:09.071029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.190 [2024-11-07 13:44:09.071264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.190 [2024-11-07 13:44:09.071498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.190 [2024-11-07 13:44:09.071510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.190 [2024-11-07 13:44:09.071520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.190 [2024-11-07 13:44:09.071530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
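(Editor's note) The "Starting SPDK v25.01-pre" notice and the DPDK EAL parameter line mark the replacement target coming up. A hedged sketch of how an application hands such an EAL argument vector to DPDK directly; SPDK assembles this argv internally, the sketch assumes DPDK development headers and libraries are installed, and it copies only a subset of the logged flags:

/* Illustration only: passing EAL parameters like the logged ones
 * ("-c 0xE --base-virtaddr=0x200000000000 --proc-type=auto ...") to DPDK. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                               /* program-name slot */
        "-c", "0xE",                          /* core mask: cores 1-3 */
        "--base-virtaddr=0x200000000000",
        "--proc-type=auto",
        "--file-prefix=spdk0",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }
    rte_eal_cleanup();
    return 0;
}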
00:39:01.190 [2024-11-07 13:44:09.084393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.190 [2024-11-07 13:44:09.084994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.190 [2024-11-07 13:44:09.085018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.190 [2024-11-07 13:44:09.085029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.190 [2024-11-07 13:44:09.085263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.190 [2024-11-07 13:44:09.085502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.190 [2024-11-07 13:44:09.085514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.190 [2024-11-07 13:44:09.085525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.190 [2024-11-07 13:44:09.085534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.190 [2024-11-07 13:44:09.098598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.190 [2024-11-07 13:44:09.099169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.190 [2024-11-07 13:44:09.099193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.190 [2024-11-07 13:44:09.099204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.190 [2024-11-07 13:44:09.099440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.190 [2024-11-07 13:44:09.099674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.190 [2024-11-07 13:44:09.099685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.190 [2024-11-07 13:44:09.099695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.190 [2024-11-07 13:44:09.099705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.190 [2024-11-07 13:44:09.112571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.190 [2024-11-07 13:44:09.113227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.190 [2024-11-07 13:44:09.113273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.190 [2024-11-07 13:44:09.113289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.190 [2024-11-07 13:44:09.113557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.190 [2024-11-07 13:44:09.113796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.190 [2024-11-07 13:44:09.113809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.190 [2024-11-07 13:44:09.113821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.190 [2024-11-07 13:44:09.113833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.190 [2024-11-07 13:44:09.126688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.190 [2024-11-07 13:44:09.127273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.190 [2024-11-07 13:44:09.127299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.190 [2024-11-07 13:44:09.127311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.190 [2024-11-07 13:44:09.127546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.190 [2024-11-07 13:44:09.127779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.190 [2024-11-07 13:44:09.127791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.190 [2024-11-07 13:44:09.127806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.190 [2024-11-07 13:44:09.127816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.190 [2024-11-07 13:44:09.140672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.190 [2024-11-07 13:44:09.141215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.190 [2024-11-07 13:44:09.141269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.190 [2024-11-07 13:44:09.141286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.190 [2024-11-07 13:44:09.141553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.190 [2024-11-07 13:44:09.141792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.190 [2024-11-07 13:44:09.141805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.190 [2024-11-07 13:44:09.141816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.190 [2024-11-07 13:44:09.141827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.190 [2024-11-07 13:44:09.154740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.190 [2024-11-07 13:44:09.155424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.190 [2024-11-07 13:44:09.155470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.190 [2024-11-07 13:44:09.155486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.190 [2024-11-07 13:44:09.155753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.190 [2024-11-07 13:44:09.156002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.191 [2024-11-07 13:44:09.156017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.191 [2024-11-07 13:44:09.156028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.191 [2024-11-07 13:44:09.156040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.191 [2024-11-07 13:44:09.168900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.191 [2024-11-07 13:44:09.169482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.191 [2024-11-07 13:44:09.169528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.191 [2024-11-07 13:44:09.169545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.191 [2024-11-07 13:44:09.169811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.191 [2024-11-07 13:44:09.170059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.191 [2024-11-07 13:44:09.170074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.191 [2024-11-07 13:44:09.170085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.191 [2024-11-07 13:44:09.170097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.191 [2024-11-07 13:44:09.182966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.191 [2024-11-07 13:44:09.183590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.191 [2024-11-07 13:44:09.183615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.191 [2024-11-07 13:44:09.183627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.191 [2024-11-07 13:44:09.183869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.191 [2024-11-07 13:44:09.184104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.191 [2024-11-07 13:44:09.184116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.191 [2024-11-07 13:44:09.184126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.191 [2024-11-07 13:44:09.184136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.453 [2024-11-07 13:44:09.196999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.453 [2024-11-07 13:44:09.197701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.453 [2024-11-07 13:44:09.197748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.453 [2024-11-07 13:44:09.197766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.453 [2024-11-07 13:44:09.198041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.453 [2024-11-07 13:44:09.198281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.453 [2024-11-07 13:44:09.198294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.453 [2024-11-07 13:44:09.198305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.453 [2024-11-07 13:44:09.198316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.453 [2024-11-07 13:44:09.198939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:01.453 [2024-11-07 13:44:09.211183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.453 [2024-11-07 13:44:09.211935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.453 [2024-11-07 13:44:09.211982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.453 [2024-11-07 13:44:09.212000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.453 [2024-11-07 13:44:09.212270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.453 [2024-11-07 13:44:09.212510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.453 [2024-11-07 13:44:09.212523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.453 [2024-11-07 13:44:09.212534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.453 [2024-11-07 13:44:09.212546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
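(Editor's note) spdk_app_start reports "Total cores available: 3" because the target was launched with -m 0xE: binary 1110 selects cores 1, 2 and 3 while excluding core 0, which is exactly where the three reactors start a few records later. A quick standalone check of that mask arithmetic; __builtin_popcount assumes GCC or Clang:

/* Sanity check on the core mask: -m 0xE is binary 1110, i.e. cores 1-3. */
#include <stdio.h>

int main(void)
{
    unsigned int mask = 0xE;

    /* Prints: Total cores available: 3 */
    printf("Total cores available: %d\n", __builtin_popcount(mask));
    for (int core = 0; core < 32; core++) {
        if (mask & (1u << core)) {
            printf("core %d selected\n", core);   /* cores 1, 2, 3 */
        }
    }
    return 0;
}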
00:39:01.453 [2024-11-07 13:44:09.225203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.453 [2024-11-07 13:44:09.225963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.453 [2024-11-07 13:44:09.226009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.453 [2024-11-07 13:44:09.226025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.453 [2024-11-07 13:44:09.226293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.453 [2024-11-07 13:44:09.226532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.453 [2024-11-07 13:44:09.226545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.453 [2024-11-07 13:44:09.226556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.453 [2024-11-07 13:44:09.226567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.453 [2024-11-07 13:44:09.239218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.453 [2024-11-07 13:44:09.239955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.453 [2024-11-07 13:44:09.240001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.453 [2024-11-07 13:44:09.240019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.453 [2024-11-07 13:44:09.240286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.453 [2024-11-07 13:44:09.240524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.453 [2024-11-07 13:44:09.240537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.453 [2024-11-07 13:44:09.240549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.453 [2024-11-07 13:44:09.240560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.453 [2024-11-07 13:44:09.253249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.453 [2024-11-07 13:44:09.253878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.453 [2024-11-07 13:44:09.253903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420
00:39:01.453 [2024-11-07 13:44:09.253916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set
00:39:01.453 [2024-11-07 13:44:09.254151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor
00:39:01.453 [2024-11-07 13:44:09.254385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.453 [2024-11-07 13:44:09.254397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.453 [2024-11-07 13:44:09.254407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.453 [2024-11-07 13:44:09.254418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.453 [2024-11-07 13:44:09.267264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.453 [2024-11-07 13:44:09.267814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.453 [2024-11-07 13:44:09.267870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420
00:39:01.453 [2024-11-07 13:44:09.267894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set
00:39:01.453 [2024-11-07 13:44:09.268160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor
00:39:01.453 [2024-11-07 13:44:09.268400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.453 [2024-11-07 13:44:09.268413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.453 [2024-11-07 13:44:09.268425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.453 [2024-11-07 13:44:09.268437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.453 [2024-11-07 13:44:09.274136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:39:01.453 [2024-11-07 13:44:09.274171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:39:01.453 [2024-11-07 13:44:09.274180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:39:01.453 [2024-11-07 13:44:09.274190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:39:01.453 [2024-11-07 13:44:09.274197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:39:01.453 [2024-11-07 13:44:09.275758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:01.453 [2024-11-07 13:44:09.275888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.453 [2024-11-07 13:44:09.275913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:01.454 [2024-11-07 13:44:09.281311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.454 [2024-11-07 13:44:09.281979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.454 [2024-11-07 13:44:09.282026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.454 [2024-11-07 13:44:09.282042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.454 [2024-11-07 13:44:09.282311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.454 [2024-11-07 13:44:09.282550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.454 [2024-11-07 13:44:09.282563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.454 [2024-11-07 13:44:09.282575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.454 [2024-11-07 13:44:09.282588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.454 [2024-11-07 13:44:09.295478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.454 [2024-11-07 13:44:09.296208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.454 [2024-11-07 13:44:09.296255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.454 [2024-11-07 13:44:09.296271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.454 [2024-11-07 13:44:09.296538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.454 [2024-11-07 13:44:09.296779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.454 [2024-11-07 13:44:09.296792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.454 [2024-11-07 13:44:09.296803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.454 [2024-11-07 13:44:09.296819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.454 [2024-11-07 13:44:09.309477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.454 [2024-11-07 13:44:09.310165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.454 [2024-11-07 13:44:09.310211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.454 [2024-11-07 13:44:09.310227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.454 [2024-11-07 13:44:09.310494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.454 [2024-11-07 13:44:09.310733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.454 [2024-11-07 13:44:09.310746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.454 [2024-11-07 13:44:09.310757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.454 [2024-11-07 13:44:09.310769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.454 [2024-11-07 13:44:09.323649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.454 [2024-11-07 13:44:09.324268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.454 [2024-11-07 13:44:09.324315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.454 [2024-11-07 13:44:09.324332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.454 [2024-11-07 13:44:09.324601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.454 [2024-11-07 13:44:09.324840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.454 [2024-11-07 13:44:09.324854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.454 [2024-11-07 13:44:09.324874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.454 [2024-11-07 13:44:09.324887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.454 [2024-11-07 13:44:09.337768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.454 [2024-11-07 13:44:09.338494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.454 [2024-11-07 13:44:09.338541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.454 [2024-11-07 13:44:09.338557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.454 [2024-11-07 13:44:09.338835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.454 [2024-11-07 13:44:09.339083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.454 [2024-11-07 13:44:09.339098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.454 [2024-11-07 13:44:09.339109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.454 [2024-11-07 13:44:09.339120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.454 [2024-11-07 13:44:09.351816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.454 [2024-11-07 13:44:09.352498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.454 [2024-11-07 13:44:09.352545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420 00:39:01.454 [2024-11-07 13:44:09.352561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:39:01.454 [2024-11-07 13:44:09.352829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:39:01.454 [2024-11-07 13:44:09.353079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.454 [2024-11-07 13:44:09.353093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.454 [2024-11-07 13:44:09.353104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.454 [2024-11-07 13:44:09.353116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.454 [2024-11-07 13:44:09.365979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.454 [2024-11-07 13:44:09.366748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.454 [2024-11-07 13:44:09.366794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416c00 with addr=10.0.0.2, port=4420
00:39:01.454 [2024-11-07 13:44:09.366810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set
00:39:01.454 [2024-11-07 13:44:09.367086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor
00:39:01.454 [2024-11-07 13:44:09.367326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.454 [2024-11-07 13:44:09.367340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.454 [2024-11-07 13:44:09.367351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.454 [2024-11-07 13:44:09.367362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[... the identical nine-line reset/reconnect cycle repeats roughly every 14 ms, timestamps 13:44:09.380020 through 13:44:09.804774, each attempt failing the same way; one periodic bdevperf sample is interleaved mid-stream: 3794.50 IOPS, 14.82 MiB/s [2024-11-07T12:44:09.725Z] ...]
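The errno = 111 in every cycle above is ECONNREFUSED: the host keeps dialing 10.0.0.2:4420 while nothing is listening there yet (the target's listener is only created further down, at 13:44:09.951782). A harness-side alternative to burning reconnect attempts is to poll for the listener first. A minimal sketch, assuming bash and iproute2's ss are available; the helper name is hypothetical and the address/port are taken from the log:

    # Hypothetical helper (not from the SPDK scripts): wait until a TCP
    # listener exists on addr:port, so connect() stops returning ECONNREFUSED.
    wait_for_listener() {
        local addr=$1 port=$2 i
        for ((i = 0; i < 100; i++)); do
            # -l listening sockets, -t TCP, -n numeric; match the local addr:port
            if ss -ltn | grep -qF "${addr}:${port}"; then
                return 0
            fi
            sleep 0.1
        done
        return 1   # timed out; connects would still fail with errno 111
    }
    wait_for_listener 10.0.0.2 4420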
00:39:01.981 13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 ))
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... two more failing reset cycles (13:44:09.817422, 13:44:09.831449) elided; they continue to interleave with the target configuration below ...]
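The (( i == 0 )) / return 0 pair above is the tail of the harness's poll loop: start_nvmf_tgt retries an RPC until the freshly launched target answers, and i only reaches 0 if every attempt failed. The shape of that loop, as a hedged sketch (the counter size and the probe RPC are illustrative, not the exact autotest_common.sh code):

    # Illustrative poll-until-ready loop; rpc_get_methods is a cheap RPC that
    # answers as soon as the target's RPC server is up.
    wait_for_rpc() {
        local i=50
        while (( i != 0 )); do
            rpc_cmd rpc_get_methods >/dev/null 2>&1 && break
            sleep 0.2
            (( i-- ))
        done
        (( i == 0 )) && return 1   # every attempt failed
        return 0                   # mirrors the '(( i == 0 )) ... return 0' trace
    }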
00:39:01.982 [... one more failing reset cycle (13:44:09.845455) elided ...]
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-11-07 13:44:09.858644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... one more failing reset cycle (13:44:09.859486) elided ...]
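rpc_cmd forwards its arguments verbatim to SPDK's scripts/rpc.py against the running target, so the transport-creation step above can be reproduced by hand roughly like this (a sketch, not harness code: the RPC socket path is SPDK's default and an assumption here; -u 8192 sets the transport's I/O unit size):

    # Hedged stand-alone equivalent of the rpc_cmd call above, run from the
    # SPDK repo root. /var/tmp/spdk.sock is the default RPC socket (assumed).
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192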
00:39:01.982 13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... four more failing reset cycles (13:44:09.873583 through 13:44:09.915900) elided ...]
00:39:01.982 Malloc0
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... one more failing reset cycle (13:44:09.930087) elided ...]
00:39:01.982 13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... one more failing reset cycle (starting 13:44:09.944166, interleaved with the next trace lines) elided ...]
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-11-07 13:44:09.951782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:44:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4139595
[2024-11-07 13:44:09.958344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:02.243 [2024-11-07 13:44:10.031601] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
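Once that listener exists, the reset attempt started at 13:44:09.958344 finally connects and the controller comes back at 13:44:10.031601. De-interleaved from the host-side noise, the whole target bring-up performed by host/bdevperf.sh@17-21 is just five RPCs (commands verbatim from the trace; rpc_cmd resolves to scripts/rpc.py in the harness):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                                        # 64 MB RAM bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # subsystem
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # namespace
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # listener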
00:39:03.755 4226.00 IOPS, 16.51 MiB/s
[2024-11-07T12:44:12.704Z] 4953.62 IOPS, 19.35 MiB/s
[2024-11-07T12:44:14.087Z] 5515.89 IOPS, 21.55 MiB/s
[2024-11-07T12:44:15.027Z] 5978.40 IOPS, 23.35 MiB/s
[2024-11-07T12:44:16.066Z] 6348.18 IOPS, 24.80 MiB/s
[2024-11-07T12:44:17.005Z] 6647.50 IOPS, 25.97 MiB/s
[2024-11-07T12:44:17.945Z] 6910.00 IOPS, 26.99 MiB/s
[2024-11-07T12:44:18.885Z] 7131.00 IOPS, 27.86 MiB/s
00:39:10.878 Latency(us)
[2024-11-07T12:44:18.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:10.878 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:39:10.878 Verification LBA range: start 0x0 length 0x4000
00:39:10.878 Nvme1n1 : 15.00 7320.91 28.60 9510.50 0.00 7578.28 860.16 26105.17
[2024-11-07T12:44:18.885Z] ===================================================================================================================
[2024-11-07T12:44:18.885Z] Total : 7320.91 28.60 9510.50 0.00 7578.28 860.16 26105.17
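The MiB/s column follows from the IOPS column: the job uses 4096-byte I/Os (IO size: 4096 in the job line above), so throughput is IOPS x 4096 / 2^20. A quick check of the Total row:

    awk 'BEGIN { printf "%.2f MiB/s\n", 7320.91 * 4096 / 1048576 }'   # prints 28.60 MiB/s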
-- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:11.708 13:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:11.708 13:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4140857' 00:39:11.708 killing process with pid 4140857 00:39:11.708 13:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 4140857 00:39:11.708 13:44:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 4140857 00:39:12.277 13:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:12.277 13:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:12.277 13:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:12.277 13:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:39:12.277 13:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:39:12.277 13:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:12.277 13:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:39:12.277 13:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:12.277 13:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:12.277 13:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.277 13:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:12.277 13:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.187 13:44:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:14.448 00:39:14.448 real 0m30.932s 00:39:14.448 user 1m10.410s 00:39:14.448 sys 0m8.203s 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:14.448 ************************************ 00:39:14.448 END TEST nvmf_bdevperf 00:39:14.448 ************************************ 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.448 ************************************ 00:39:14.448 START TEST nvmf_target_disconnect 00:39:14.448 ************************************ 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:14.448 * Looking for test storage... 
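Before the next test's storage probe continues, a note on the bdevperf summary above: with 4096-byte I/O the MiB/s column is just the IOPS column rescaled (IOPS x 4096 B / 2^20). A quick sanity check of the Nvme1n1 row, as a sketch:

    # Verify that 7320.91 IOPS at 4096 bytes per I/O is the reported 28.60 MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 7320.91 * 4096 / (1024 * 1024) }'

The large Fail/s figure (9510.50) is not an arithmetic error in the table; it counts I/O completions failed per second, consistent with the controller resets this test deliberately injects while verification traffic is running.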
00:39:14.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:39:14.448 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:14.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.449 --rc genhtml_branch_coverage=1 00:39:14.449 --rc genhtml_function_coverage=1 00:39:14.449 --rc genhtml_legend=1 00:39:14.449 --rc geninfo_all_blocks=1 00:39:14.449 --rc geninfo_unexecuted_blocks=1 00:39:14.449 00:39:14.449 ' 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:14.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.449 --rc genhtml_branch_coverage=1 00:39:14.449 --rc genhtml_function_coverage=1 00:39:14.449 --rc genhtml_legend=1 00:39:14.449 --rc geninfo_all_blocks=1 00:39:14.449 --rc geninfo_unexecuted_blocks=1 00:39:14.449 00:39:14.449 ' 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:14.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.449 --rc genhtml_branch_coverage=1 00:39:14.449 --rc genhtml_function_coverage=1 00:39:14.449 --rc genhtml_legend=1 00:39:14.449 --rc geninfo_all_blocks=1 00:39:14.449 --rc geninfo_unexecuted_blocks=1 00:39:14.449 00:39:14.449 ' 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:14.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.449 --rc genhtml_branch_coverage=1 00:39:14.449 --rc genhtml_function_coverage=1 00:39:14.449 --rc genhtml_legend=1 00:39:14.449 --rc geninfo_all_blocks=1 00:39:14.449 --rc geninfo_unexecuted_blocks=1 00:39:14.449 00:39:14.449 ' 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:14.449 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:14.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:39:14.711 13:44:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:22.848 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:22.848 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:22.849 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:22.849 Found net devices under 0000:31:00.0: cvl_0_0 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:22.849 Found net devices under 0000:31:00.1: cvl_0_1 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
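The device walk that just completed matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b) and read their kernel net devices (cvl_0_0, cvl_0_1) out of sysfs; nvmf_tcp_init now starts assigning roles and addresses to them. A standalone sketch of the same discovery, assuming only the Linux sysfs layout and the E810 IDs seen in this log (not the framework's own helper):

    #!/usr/bin/env bash
    # List kernel net devices sitting on Intel E810 (0x8086:0x159b) PCI functions.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e "$net" ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done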
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:39:22.849 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:39:23.109 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:39:23.109 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:39:23.109 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:39:23.109 13:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:39:23.109 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:39:23.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:39:23.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms
00:39:23.110
00:39:23.110 --- 10.0.0.2 ping statistics ---
00:39:23.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:23.110 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms
00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:39:23.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:39:23.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:39:23.110 00:39:23.110 --- 10.0.0.1 ping statistics --- 00:39:23.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.110 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:23.110 ************************************ 00:39:23.110 START TEST nvmf_target_disconnect_tc1 00:39:23.110 ************************************ 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:23.110 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:23.370 13:44:31 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:23.370 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:23.370 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:23.370 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:23.370 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:39:23.370 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:23.370 [2024-11-07 13:44:31.323984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.370 [2024-11-07 13:44:31.324083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000416980 with addr=10.0.0.2, port=4420 00:39:23.370 [2024-11-07 13:44:31.324148] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:23.370 [2024-11-07 13:44:31.324164] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:23.370 [2024-11-07 13:44:31.324178] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:39:23.370 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:39:23.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:39:23.370 Initializing NVMe Controllers 00:39:23.370 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:39:23.370 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:23.370 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:23.370 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:23.370 00:39:23.370 real 0m0.248s 00:39:23.370 user 0m0.092s 00:39:23.370 sys 0m0.155s 00:39:23.370 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:23.370 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:23.370 ************************************ 00:39:23.370 END TEST nvmf_target_disconnect_tc1 00:39:23.370 ************************************ 00:39:23.630 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:39:23.630 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:23.630 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:39:23.630 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:23.630 ************************************ 00:39:23.630 START TEST nvmf_target_disconnect_tc2 00:39:23.630 ************************************ 00:39:23.630 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:39:23.630 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:39:23.630 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:39:23.630 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:23.630 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:23.630 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:23.630 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:39:23.630 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4147546 00:39:23.631 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4147546 00:39:23.631 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 4147546 ']' 00:39:23.631 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:23.631 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:23.631 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:23.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:23.631 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:23.631 13:44:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:23.631 [2024-11-07 13:44:31.506121] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:39:23.631 [2024-11-07 13:44:31.506260] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:23.891 [2024-11-07 13:44:31.666648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:23.891 [2024-11-07 13:44:31.793995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:23.891 [2024-11-07 13:44:31.794068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:39:23.892 [2024-11-07 13:44:31.794082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:23.892 [2024-11-07 13:44:31.794095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:23.892 [2024-11-07 13:44:31.794105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:23.892 [2024-11-07 13:44:31.796991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:23.892 [2024-11-07 13:44:31.797127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:23.892 [2024-11-07 13:44:31.797234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:23.892 [2024-11-07 13:44:31.797259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.462 Malloc0 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.462 [2024-11-07 13:44:32.423616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.462 13:44:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.462 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.462 [2024-11-07 13:44:32.465747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:24.723 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.723 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:24.723 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.723 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:24.723 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.723 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4147890 00:39:24.723 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:39:24.723 13:44:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:26.641 13:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4147546 00:39:26.641 13:44:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error 
(sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 [2024-11-07 13:44:34.511607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed 
with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Read completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 Write completed with error (sct=0, sc=8) 00:39:26.641 starting I/O failed 00:39:26.641 [2024-11-07 13:44:34.512100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:26.641 [2024-11-07 13:44:34.512526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.641 [2024-11-07 13:44:34.512555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.641 qpair failed and we were unable to recover it. 00:39:26.641 [2024-11-07 13:44:34.512812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.641 [2024-11-07 13:44:34.512828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.641 qpair failed and we were unable to recover it. 00:39:26.641 [2024-11-07 13:44:34.513246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.641 [2024-11-07 13:44:34.513291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.641 qpair failed and we were unable to recover it. 00:39:26.641 [2024-11-07 13:44:34.513674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.641 [2024-11-07 13:44:34.513691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.641 qpair failed and we were unable to recover it. 00:39:26.641 [2024-11-07 13:44:34.514131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.641 [2024-11-07 13:44:34.514175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.641 qpair failed and we were unable to recover it. 
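This is the heart of tc2: one SPDK app is the target (nvmfpid 4147546, started with -m 0xF0, i.e. reactors on cores 4-7 as logged above), the reconnect example is the host-side workload, and host/target_disconnect.sh@45 hard-kills the target underneath it. The two CQ transport errors (-6) and the 32-deep bursts of failed completions on qpairs 3 and 4 are the direct result. A condensed sketch of that sequence, with paths abbreviated and the subsystem setup elided (the real script drives the target via rpc_cmd exactly as shown earlier):

    # Start the target in the test netns on cores 4-7 (mask 0xF0 = 0b11110000).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # ... create the tcp transport, cnode1 subsystem, Malloc0 ns and 4420 listener ...
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 $nvmfpid   # hard-kill the target; all in-flight I/O fails (sct=0, sc=8)
    sleep 2            # reconnect now retries against a dead listener: errno 111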
[... the same three-record reconnect failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 times without variation, timestamps 13:44:34.512 through 13:44:34.579 ...]
00:39:26.647 [2024-11-07 13:44:34.579357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.579399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.579793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.579834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.580221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.580260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.580536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.580574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.580949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.580992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.581292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.581332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.581693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.581733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.582068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.582109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.582463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.582502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.582856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.582907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 
00:39:26.647 [2024-11-07 13:44:34.583301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.583341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.583715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.583753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.584146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.584186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.584602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.584643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.585001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.585042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.647 [2024-11-07 13:44:34.585473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.647 [2024-11-07 13:44:34.585513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.647 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.585887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.585928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.586208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.586256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.586607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.586647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.587019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.587060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 
00:39:26.648 [2024-11-07 13:44:34.587433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.587473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.587873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.587916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.588180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.588220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.588598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.588639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.589040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.589082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.589470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.589511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.589893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.589934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.590310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.590350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.590740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.590781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.591175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.591218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 
00:39:26.648 [2024-11-07 13:44:34.591595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.591635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.591912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.591952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.592289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.592330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.592696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.592736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.592927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.592969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.593201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.593241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.593567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.593606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.593892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.593940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.594339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.594378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.594654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.594697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 
00:39:26.648 [2024-11-07 13:44:34.595059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.595100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.595476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.595516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.595896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.595943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.596283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.596323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.596700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.596739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.597041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.597082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.597449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.597488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.597871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.597912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.598280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.598320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.598687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.598728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 
00:39:26.648 [2024-11-07 13:44:34.599093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.599149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.599420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.599459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.599833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.599883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.648 qpair failed and we were unable to recover it. 00:39:26.648 [2024-11-07 13:44:34.600241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.648 [2024-11-07 13:44:34.600281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.600661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.600701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.601082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.601127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.601493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.601532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.601784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.601832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.602225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.602267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.602636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.602675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 
00:39:26.649 [2024-11-07 13:44:34.603002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.603042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.603418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.603459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.603805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.603845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.604217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.604256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.604616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.604655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.605064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.605106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.605449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.605489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.605856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.605920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.606303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.606343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.606718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.606759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 
00:39:26.649 [2024-11-07 13:44:34.607048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.607093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.607415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.607457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.607820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.607860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.608242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.608283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.608589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.608627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.608979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.609022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.609394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.609433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.609613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.609656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.610029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.610071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.610401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.610440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 
00:39:26.649 [2024-11-07 13:44:34.610801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.610840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.611196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.611238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.611483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.611520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.611810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.611849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.612220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.612262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.612628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.612667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.613026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.613068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.613417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.613456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.613879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.613921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 00:39:26.649 [2024-11-07 13:44:34.614282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.649 [2024-11-07 13:44:34.614323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.649 qpair failed and we were unable to recover it. 
00:39:26.649 [2024-11-07 13:44:34.614700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.614739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.615122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.615163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.615513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.615554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.615940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.615983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.616368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.616408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.616764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.616804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.617180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.617221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.617581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.617629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.618004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.618046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.618389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.618429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 
00:39:26.650 [2024-11-07 13:44:34.618794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.618834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.619211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.619252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.619625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.619664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.620018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.620059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.620410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.620450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.620875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.620917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.621272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.621312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.621556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.621597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.621983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.622025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.622393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.622435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 
00:39:26.650 [2024-11-07 13:44:34.622803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.622842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.623253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.623294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.623657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.623696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.624127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.624171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.624586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.624639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.624961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.625009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.625277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.625316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.625669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.625709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.626079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.626122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.626494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.626535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 
00:39:26.650 [2024-11-07 13:44:34.626915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.626956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.627323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.650 [2024-11-07 13:44:34.627363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.650 qpair failed and we were unable to recover it. 00:39:26.650 [2024-11-07 13:44:34.627723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.627763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.628135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.628176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.628572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.628612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.628963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.629005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.629422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.629461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.629820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.629859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.630270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.630310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.630648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.630688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 
00:39:26.651 [2024-11-07 13:44:34.631044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.631086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.631374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.631413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.631703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.631742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.632135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.632176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.632538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.632579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.632926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.632967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.633327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.633366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.633803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.633848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.634220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.634261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.634527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.634566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 
00:39:26.651 [2024-11-07 13:44:34.634901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.634942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.635317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.635358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.635719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.635758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.636137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.636177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.636542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.636581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.636952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.636993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.637261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.637304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.637581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.637620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.637972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.638014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 00:39:26.651 [2024-11-07 13:44:34.638384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.651 [2024-11-07 13:44:34.638424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.651 qpair failed and we were unable to recover it. 
00:39:26.926 [2024-11-07 13:44:34.638813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.638853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.639240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.639281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.639597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.639636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.640067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.640109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.640457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.640497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.640879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.640921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.641174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.641216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.641569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.641609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.641975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.642015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.642389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.642427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 
00:39:26.926 [2024-11-07 13:44:34.642701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.642744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.643115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.643158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.643440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.643478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.643799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.643839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.644212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.644254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.644625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.644666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.644967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.645008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.645261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.645302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.645684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.645724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.646100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.646142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 
00:39:26.926 [2024-11-07 13:44:34.646503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.646542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.646815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.646854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.647206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.647246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.647574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.647614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.647942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.647982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.926 [2024-11-07 13:44:34.648349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.926 [2024-11-07 13:44:34.648388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.926 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.648746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.648785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.649155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.649197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.649587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.649641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.650001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.650045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 
00:39:26.927 [2024-11-07 13:44:34.650411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.650451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.650808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.650848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.651246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.651286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.651656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.651696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.651980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.652020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.652403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.652443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.652823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.652873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.653223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.653262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.653622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.653661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.654046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.654090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 
00:39:26.927 [2024-11-07 13:44:34.654460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.654500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.654779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.654823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.655206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.655248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.655562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.655603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.655840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.655889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.656226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.656266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.656629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.656668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.657034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.657077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.657451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.657491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.657891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.657933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 
00:39:26.927 [2024-11-07 13:44:34.658311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.658350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.658634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.658674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.659034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.659075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.659433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.659472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.659803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.659848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.660196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.660237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.660598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.660637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.660859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.660912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.661300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.661340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.661707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.661747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 
00:39:26.927 [2024-11-07 13:44:34.662166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.662206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.927 qpair failed and we were unable to recover it. 00:39:26.927 [2024-11-07 13:44:34.662620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.927 [2024-11-07 13:44:34.662658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.662933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.662973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.663260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.663304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.663637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.663677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.664047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.664088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.664450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.664489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.664854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.664905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.665278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.665318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.665653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.665692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 
00:39:26.928 [2024-11-07 13:44:34.666031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.666073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.666446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.666487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.666837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.666896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.667292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.667332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.667765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.667804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.668076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.668120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.668488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.668528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.668874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.668916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.669196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.669234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.669618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.669657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 
00:39:26.928 [2024-11-07 13:44:34.670024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.670066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.670448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.670487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.670848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.670911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.671288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.671328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.671701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.671740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.672158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.672199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.672517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.672554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.672927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.672969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.673314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.673353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.673610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.673648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 
00:39:26.928 [2024-11-07 13:44:34.673911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.673954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.674330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.674371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.674721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.674773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.675104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.928 [2024-11-07 13:44:34.675146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.928 qpair failed and we were unable to recover it. 00:39:26.928 [2024-11-07 13:44:34.675394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.675444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.675883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.675925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.676322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.676361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.676727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.676765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.677183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.677223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.677601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.677642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 
00:39:26.929 [2024-11-07 13:44:34.678017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.678059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.678402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.678441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.678775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.678814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.679187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.679229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.679582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.679622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.679976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.680017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.680398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.680437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.680806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.680846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.681219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.681260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.681621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.681660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 
00:39:26.929 [2024-11-07 13:44:34.682037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.682079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.682440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.682481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.682771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.682811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.683201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.683242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.683606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.683648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.684017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.684059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.684325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.684363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.684748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.684787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.685036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.685076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.685411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.685451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 
00:39:26.929 [2024-11-07 13:44:34.685709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.685747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.686113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.686154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.686408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.686451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.686832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.686881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.687248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.687287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.687643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.687682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.688068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.688110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.688480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.688520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.688883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.688924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 00:39:26.929 [2024-11-07 13:44:34.689300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.689338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.929 qpair failed and we were unable to recover it. 
00:39:26.929 [2024-11-07 13:44:34.689690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.929 [2024-11-07 13:44:34.689729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.690091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.690133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.690494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.690534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.690789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.690829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.691219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.691267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.691639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.691679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.692037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.692078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.692447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.692486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.692908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.692949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.693327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.693368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 
00:39:26.930 [2024-11-07 13:44:34.693801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.693839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.694223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.694263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.694623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.694663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.694931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.694974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.695345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.695384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.695750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.695790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.696211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.696252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.696702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.696742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.697134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.697176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 00:39:26.930 [2024-11-07 13:44:34.697507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.930 [2024-11-07 13:44:34.697546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:26.930 qpair failed and we were unable to recover it. 
00:39:26.930 [2024-11-07 13:44:34.697896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.930 [2024-11-07 13:44:34.697937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420
00:39:26.930 qpair failed and we were unable to recover it.
00:39:26.931 [... the same three-line group repeats for tqpair=0x615000440000 from 13:44:34.697896 through 13:44:34.713573: every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered ...]
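errno = 111 is ECONNREFUSED on Linux: the target 10.0.0.2, port 4420 (the NVMe/TCP default port) actively refused the TCP handshake, which usually means no listener was bound to that port at the time of the attempt. A minimal sketch, independent of SPDK, that reproduces the same errno by connecting to a port with no listener (the loopback address below is illustrative, not taken from this run):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),               /* NVMe/TCP default port */
        };
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr); /* illustrative address */

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With nothing listening on the port, the kernel rejects the
             * handshake with ECONNREFUSED, which is errno 111 on Linux. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }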
00:39:26.931 [2024-11-07 13:44:34.713829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.931 [2024-11-07 13:44:34.713883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:26.931 qpair failed and we were unable to recover it.
00:39:26.936 [... the same three-line group repeats for tqpair=0x615000417b00 from 13:44:34.713829 through 13:44:34.768446: every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:39:26.936 [2024-11-07 13:44:34.768803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.936 [2024-11-07 13:44:34.768817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.936 qpair failed and we were unable to recover it. 00:39:26.936 [2024-11-07 13:44:34.769154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.936 [2024-11-07 13:44:34.769168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.936 qpair failed and we were unable to recover it. 00:39:26.936 [2024-11-07 13:44:34.769457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.936 [2024-11-07 13:44:34.769470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.936 qpair failed and we were unable to recover it. 00:39:26.936 [2024-11-07 13:44:34.769787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.936 [2024-11-07 13:44:34.769799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.936 qpair failed and we were unable to recover it. 00:39:26.936 [2024-11-07 13:44:34.770083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.936 [2024-11-07 13:44:34.770097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.936 qpair failed and we were unable to recover it. 00:39:26.936 [2024-11-07 13:44:34.770385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.936 [2024-11-07 13:44:34.770398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.936 qpair failed and we were unable to recover it. 00:39:26.936 [2024-11-07 13:44:34.770581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.936 [2024-11-07 13:44:34.770596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.936 qpair failed and we were unable to recover it. 00:39:26.936 [2024-11-07 13:44:34.770916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.936 [2024-11-07 13:44:34.770931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.936 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.771255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.771269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.771582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.771596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 
00:39:26.937 [2024-11-07 13:44:34.771911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.771924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.772308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.772322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.772652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.772666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.772983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.772997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.773287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.773302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.773618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.773631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.773913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.773927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.774218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.774231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.774499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.774512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.774783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.774796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 
00:39:26.937 [2024-11-07 13:44:34.775132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.775146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.775473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.775486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.775701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.775714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.776030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.776044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.776361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.776374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.776703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.776716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.776944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.776958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.777274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.777288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.777581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.777595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.777877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.777891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 
00:39:26.937 [2024-11-07 13:44:34.778201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.778215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.778533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.778546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.778761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.778774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.779097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.779111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.779329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.779342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.779682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.779696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.780023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.780038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.780235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.780250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.780560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.780574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.780889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.780903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 
00:39:26.937 [2024-11-07 13:44:34.781197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.781211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.781514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.781528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.781837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.781850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.782198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.782212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.782524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.937 [2024-11-07 13:44:34.782538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.937 qpair failed and we were unable to recover it. 00:39:26.937 [2024-11-07 13:44:34.782859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.782876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.783247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.783260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.783631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.783644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.783925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.783939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.784264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.784277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 
00:39:26.938 [2024-11-07 13:44:34.784609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.784623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.784811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.784824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.785147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.785161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.785504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.785519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.785815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.785831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.786147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.786161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.786377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.786391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.786750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.786764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.787083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.787097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.787426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.787441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 
00:39:26.938 [2024-11-07 13:44:34.787762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.787777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.788105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.788119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.788460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.788474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.788785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.788798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.789120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.789135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.789470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.789484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.789793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.789807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.790141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.790155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.790488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.790503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.790829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.790843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 
00:39:26.938 [2024-11-07 13:44:34.791163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.791178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.791488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.791502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.791838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.791853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.792195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.792210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.792504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.792518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.792828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.792842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.793154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.793169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.793481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.793495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.793809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.793823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 00:39:26.938 [2024-11-07 13:44:34.794103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.938 [2024-11-07 13:44:34.794117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.938 qpair failed and we were unable to recover it. 
00:39:26.938 [2024-11-07 13:44:34.794439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.794454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.794776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.794790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.795114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.795129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.795496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.795510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.795827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.795841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.796195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.796210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.796503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.796517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.796689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.796703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.797019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.797032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.797316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.797329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 
00:39:26.939 [2024-11-07 13:44:34.797640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.797653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.797969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.797983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.798314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.798327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.798651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.798664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.798981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.798997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.799284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.799297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.799629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.799642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.799955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.799969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.800267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.800280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.800477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.800490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 
00:39:26.939 [2024-11-07 13:44:34.800947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.800961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.801258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.801273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.801593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.801606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.802003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.802017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.802241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.802254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.802534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.802547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.802868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.802881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.803126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.803139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.803463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.803477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.803782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.803795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 
00:39:26.939 [2024-11-07 13:44:34.804014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.804028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.804224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.804238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.804556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.804570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.804905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.804922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.805229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.939 [2024-11-07 13:44:34.805243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.939 qpair failed and we were unable to recover it. 00:39:26.939 [2024-11-07 13:44:34.805557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.805570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.805893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.805907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.806264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.806277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.806587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.806600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.806932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.806946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 
00:39:26.940 [2024-11-07 13:44:34.807142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.807156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.807485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.807498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.807830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.807843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.808174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.808188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.808474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.808487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.808820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.808833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.809152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.809174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.809494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.809508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.809840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.809854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 00:39:26.940 [2024-11-07 13:44:34.810184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.810197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it. 
00:39:26.940 [2024-11-07 13:44:34.810514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.940 [2024-11-07 13:44:34.810528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.940 qpair failed and we were unable to recover it.
00:39:26.940-00:39:26.946 [2024-11-07 13:44:34.810860 .. 13:44:34.877369] the same three-line error repeats for roughly 200 further reconnect attempts against tqpair=0x615000417b00 (addr=10.0.0.2, port=4420), each ending in "qpair failed and we were unable to recover it."
00:39:26.946 [2024-11-07 13:44:34.877700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.946 [2024-11-07 13:44:34.877722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.946 qpair failed and we were unable to recover it. 00:39:26.946 [2024-11-07 13:44:34.878041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.946 [2024-11-07 13:44:34.878055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.946 qpair failed and we were unable to recover it. 00:39:26.946 [2024-11-07 13:44:34.878370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.946 [2024-11-07 13:44:34.878384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.946 qpair failed and we were unable to recover it. 00:39:26.946 [2024-11-07 13:44:34.878628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.946 [2024-11-07 13:44:34.878642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.946 qpair failed and we were unable to recover it. 00:39:26.946 [2024-11-07 13:44:34.878850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.946 [2024-11-07 13:44:34.878926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.946 qpair failed and we were unable to recover it. 00:39:26.946 [2024-11-07 13:44:34.879267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.946 [2024-11-07 13:44:34.879280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.946 qpair failed and we were unable to recover it. 00:39:26.946 [2024-11-07 13:44:34.879599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.946 [2024-11-07 13:44:34.879613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.946 qpair failed and we were unable to recover it. 00:39:26.946 [2024-11-07 13:44:34.879921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.946 [2024-11-07 13:44:34.879935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.946 qpair failed and we were unable to recover it. 00:39:26.946 [2024-11-07 13:44:34.880277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.946 [2024-11-07 13:44:34.880291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.946 qpair failed and we were unable to recover it. 00:39:26.946 [2024-11-07 13:44:34.880650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.946 [2024-11-07 13:44:34.880664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.946 qpair failed and we were unable to recover it. 
00:39:26.946 [2024-11-07 13:44:34.881014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.881029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.881240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.881255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.881567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.881582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.881905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.881919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.882220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.882234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.882621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.882636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.882960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.882975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.883290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.883303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.883671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.883684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.883999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.884013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 
00:39:26.947 [2024-11-07 13:44:34.884344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.884357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.884714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.884731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.885027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.885042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.885354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.885369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.885698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.885712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.886028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.886042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.886276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.886289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.886611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.886624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.886948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.886966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.887284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.887298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 
00:39:26.947 [2024-11-07 13:44:34.887607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.887622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.887929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.887943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.888118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.888132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.888450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.888464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.888764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.888778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.889066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.889080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.889276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.889289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.889475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.889490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.889770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.889783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.890002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.890016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 
00:39:26.947 [2024-11-07 13:44:34.890333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.890348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.890669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.890683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.891077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.891091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.891374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.891388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.891697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.891710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.947 [2024-11-07 13:44:34.892106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.947 [2024-11-07 13:44:34.892121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.947 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.892446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.892459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.892789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.892802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.893135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.893149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.893471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.893484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 
00:39:26.948 [2024-11-07 13:44:34.893851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.893869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.894171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.894185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.894401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.894415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.894634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.894649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.894872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.894886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.895203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.895217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.895533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.895547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.895854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.895871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.896190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.896203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.896535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.896548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 
00:39:26.948 [2024-11-07 13:44:34.896877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.896892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.897109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.897124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.897455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.897468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.897777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.897790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.898102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.898116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.898419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.898433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.898738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.898752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.899017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.899031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.899213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.899228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.899555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.899568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 
00:39:26.948 [2024-11-07 13:44:34.899888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.899903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.900097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.900112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.900404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.900418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.900803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.900816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.901131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.901145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.901481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.901496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.901838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.901852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.902186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.902200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.902573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.902587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.902905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.902919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 
00:39:26.948 [2024-11-07 13:44:34.903281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.903295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.903613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.948 [2024-11-07 13:44:34.903626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.948 qpair failed and we were unable to recover it. 00:39:26.948 [2024-11-07 13:44:34.903939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.903954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.904263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.904276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.904576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.904590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.904932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.904945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.905263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.905276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.905607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.905621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.905959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.905973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.906306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.906319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 
00:39:26.949 [2024-11-07 13:44:34.906632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.906645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.906973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.906987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.907315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.907329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.907546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.907559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.907648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.907662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.907956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.907970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.908309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.908323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.908670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.908685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.908996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.909011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.909283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.909297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 
00:39:26.949 [2024-11-07 13:44:34.909623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.909637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.909910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.909926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.910263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.910276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.910584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.910598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.910899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.910913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.911238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.911251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.911530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.911543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.911640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.911654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.911947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.911960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.912273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.912286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 
00:39:26.949 [2024-11-07 13:44:34.912615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.912628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.912940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.912953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.913169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.913183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.913514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.913527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.913856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.913872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.949 [2024-11-07 13:44:34.914172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.949 [2024-11-07 13:44:34.914185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.949 qpair failed and we were unable to recover it. 00:39:26.950 [2024-11-07 13:44:34.914489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.950 [2024-11-07 13:44:34.914502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.950 qpair failed and we were unable to recover it. 00:39:26.950 [2024-11-07 13:44:34.914817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.950 [2024-11-07 13:44:34.914830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.950 qpair failed and we were unable to recover it. 00:39:26.950 [2024-11-07 13:44:34.915153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.950 [2024-11-07 13:44:34.915167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.950 qpair failed and we were unable to recover it. 00:39:26.950 [2024-11-07 13:44:34.915485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.950 [2024-11-07 13:44:34.915507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.950 qpair failed and we were unable to recover it. 
00:39:26.950 [2024-11-07 13:44:34.915832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.950 [2024-11-07 13:44:34.915846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.950 qpair failed and we were unable to recover it. 00:39:26.950 [2024-11-07 13:44:34.916235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.950 [2024-11-07 13:44:34.916250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.950 qpair failed and we were unable to recover it. 00:39:26.950 [2024-11-07 13:44:34.916569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.950 [2024-11-07 13:44:34.916582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:26.950 qpair failed and we were unable to recover it. 00:39:27.226 [2024-11-07 13:44:34.916872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.226 [2024-11-07 13:44:34.916887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.226 qpair failed and we were unable to recover it. 00:39:27.226 [2024-11-07 13:44:34.917197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.226 [2024-11-07 13:44:34.917212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.226 qpair failed and we were unable to recover it. 00:39:27.226 [2024-11-07 13:44:34.917494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.226 [2024-11-07 13:44:34.917507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.226 qpair failed and we were unable to recover it. 00:39:27.226 [2024-11-07 13:44:34.917823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.226 [2024-11-07 13:44:34.917837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.226 qpair failed and we were unable to recover it. 00:39:27.226 [2024-11-07 13:44:34.918143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.226 [2024-11-07 13:44:34.918157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.226 qpair failed and we were unable to recover it. 00:39:27.226 [2024-11-07 13:44:34.918484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.226 [2024-11-07 13:44:34.918498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.226 qpair failed and we were unable to recover it. 00:39:27.226 [2024-11-07 13:44:34.918816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.226 [2024-11-07 13:44:34.918829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.226 qpair failed and we were unable to recover it. 
00:39:27.226 [2024-11-07 13:44:34.919066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.226 [2024-11-07 13:44:34.919080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.226 qpair failed and we were unable to recover it.
00:39:27.226 [... the same three-line connect()/qpair error repeats continuously (roughly 200 occurrences, all against tqpair=0x615000417b00, addr=10.0.0.2, port=4420) from 13:44:34.919 through 13:44:34.985, differing only in timestamps ...]
00:39:27.232 [2024-11-07 13:44:34.985747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.232 [2024-11-07 13:44:34.985760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.232 qpair failed and we were unable to recover it.
00:39:27.232 [2024-11-07 13:44:34.985990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.232 [2024-11-07 13:44:34.986004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.232 qpair failed and we were unable to recover it. 00:39:27.232 [2024-11-07 13:44:34.986306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.232 [2024-11-07 13:44:34.986319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.232 qpair failed and we were unable to recover it. 00:39:27.232 [2024-11-07 13:44:34.986627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.232 [2024-11-07 13:44:34.986641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.232 qpair failed and we were unable to recover it. 00:39:27.232 [2024-11-07 13:44:34.986964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.232 [2024-11-07 13:44:34.986977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.232 qpair failed and we were unable to recover it. 00:39:27.232 [2024-11-07 13:44:34.987261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.232 [2024-11-07 13:44:34.987275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.232 qpair failed and we were unable to recover it. 00:39:27.232 [2024-11-07 13:44:34.987590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.232 [2024-11-07 13:44:34.987603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.232 qpair failed and we were unable to recover it. 00:39:27.232 [2024-11-07 13:44:34.987910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.232 [2024-11-07 13:44:34.987924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.232 qpair failed and we were unable to recover it. 00:39:27.232 [2024-11-07 13:44:34.988233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.232 [2024-11-07 13:44:34.988246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.988617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.988632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.988951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.988965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 
00:39:27.233 [2024-11-07 13:44:34.989256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.989269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.989577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.989590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.989876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.989889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.990104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.990117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.990562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.990575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.990853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.990875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.991212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.991226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.991482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.991495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.991808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.991821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.992104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.992125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 
00:39:27.233 [2024-11-07 13:44:34.992462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.992476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.992734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.992747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.993075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.993089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.993394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.993408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.993703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.993718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.993950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.993964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.994284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.994298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.994610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.994624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.994946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.994962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.995253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.995266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 
00:39:27.233 [2024-11-07 13:44:34.995580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.995593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.995870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.995884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.996201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.996215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.996531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.996544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.996852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.996868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.997019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.997034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.997344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.997358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.997560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.997573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.997854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.997871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.998190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.998203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 
00:39:27.233 [2024-11-07 13:44:34.998573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.998586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.998859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.998879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.999217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.999230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.999535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.999548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.999676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:34.999690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.233 qpair failed and we were unable to recover it. 00:39:27.233 [2024-11-07 13:44:34.999999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.233 [2024-11-07 13:44:35.000012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.000366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.000380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.000707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.000720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.001054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.001068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.001397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.001410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 
00:39:27.234 [2024-11-07 13:44:35.001639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.001652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.001972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.001985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.002371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.002385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.002711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.002725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.002983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.002997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.003292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.003306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.003625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.003638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.004027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.004041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.004363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.004377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.004686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.004699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 
00:39:27.234 [2024-11-07 13:44:35.005070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.005086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.005399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.005413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.005619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.005632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.005837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.005850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.006180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.006195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.006520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.006533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.006843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.006857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.007181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.007195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.007518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.007534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.007850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.007869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 
00:39:27.234 [2024-11-07 13:44:35.008217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.008231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.008570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.008584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.008915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.008930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.009334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.009347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.009655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.009669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.010002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.234 [2024-11-07 13:44:35.010016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.234 qpair failed and we were unable to recover it. 00:39:27.234 [2024-11-07 13:44:35.010211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.010225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.010501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.010514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.010794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.010808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.011139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.011153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 
00:39:27.235 [2024-11-07 13:44:35.011483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.011496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.011812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.011825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.012125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.012139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.012459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.012472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.012667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.012680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.012855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.012873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.013257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.013271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.013549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.013562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.013898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.013912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.014232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.014245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 
00:39:27.235 [2024-11-07 13:44:35.014552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.014566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.014882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.014896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.015200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.015214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.015528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.015550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.015869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.015883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.016200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.016213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.016586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.016600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.016915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.016929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.017235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.017248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.017566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.017579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 
00:39:27.235 [2024-11-07 13:44:35.017911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.017925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.018329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.018342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.018741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.018754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.019177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.019191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.019425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.019439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.019785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.019799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.020086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.020100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.020409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.020422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.020753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.020769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.021093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.021106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 
00:39:27.235 [2024-11-07 13:44:35.021410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.021423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.235 [2024-11-07 13:44:35.021734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.235 [2024-11-07 13:44:35.021746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.235 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.022057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.022071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.022356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.022370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.022647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.022661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.022973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.022987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.023301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.023315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.023645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.023659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.023970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.023984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.024299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.024312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 
00:39:27.236 [2024-11-07 13:44:35.024650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.024664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.024989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.025003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.025284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.025298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.025606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.025619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.025959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.025973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.026296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.026309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.026488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.026502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.026823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.026836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.027163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.027177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 00:39:27.236 [2024-11-07 13:44:35.027505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.236 [2024-11-07 13:44:35.027518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.236 qpair failed and we were unable to recover it. 
00:39:27.236 [2024-11-07 13:44:35.027835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.236 [2024-11-07 13:44:35.027848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.236 qpair failed and we were unable to recover it.
00:39:27.236 [2024-11-07 13:44:35.028150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.236 [2024-11-07 13:44:35.028163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.236 qpair failed and we were unable to recover it.
... [the same three-line failure sequence repeats for every subsequent reconnect attempt, roughly 200 times in this excerpt, with timestamps running from 2024-11-07 13:44:35.028485 through 13:44:35.094149; every attempt fails with errno = 111 against tqpair=0x615000417b00, addr=10.0.0.2, port=4420] ...
00:39:27.242 [2024-11-07 13:44:35.094479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.242 [2024-11-07 13:44:35.094493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.242 qpair failed and we were unable to recover it.
00:39:27.242 [2024-11-07 13:44:35.094808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.094825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 00:39:27.242 [2024-11-07 13:44:35.095131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.095147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 00:39:27.242 [2024-11-07 13:44:35.095466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.095480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 00:39:27.242 [2024-11-07 13:44:35.095797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.095811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 00:39:27.242 [2024-11-07 13:44:35.096120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.096134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 00:39:27.242 [2024-11-07 13:44:35.096455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.096469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 00:39:27.242 [2024-11-07 13:44:35.096797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.096812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 00:39:27.242 [2024-11-07 13:44:35.097028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.097043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 00:39:27.242 [2024-11-07 13:44:35.097361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.097378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 00:39:27.242 [2024-11-07 13:44:35.097688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.097701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 
00:39:27.242 [2024-11-07 13:44:35.098039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.098054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 00:39:27.242 [2024-11-07 13:44:35.098250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.098265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 00:39:27.242 [2024-11-07 13:44:35.098574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.098587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.242 qpair failed and we were unable to recover it. 00:39:27.242 [2024-11-07 13:44:35.098914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.242 [2024-11-07 13:44:35.098928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.099078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.099092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.099413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.099427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.099709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.099723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.100032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.100045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.100343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.100364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.100685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.100699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 
00:39:27.243 [2024-11-07 13:44:35.101011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.101025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.101417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.101430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.101761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.101775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.101993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.102008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.102330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.102343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.102688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.102701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.102982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.102996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.103301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.103314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.103599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.103612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.103924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.103937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 
00:39:27.243 [2024-11-07 13:44:35.104252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.104266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.104629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.104643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.104853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.104870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.105192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.105205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.105556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.105569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.105876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.105890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.106252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.106267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.106478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.106493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.106828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.106842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.107124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.107138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 
00:39:27.243 [2024-11-07 13:44:35.107438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.107452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.107729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.107742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.108049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.108064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.108374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.108387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.108679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.108693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.109028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.109042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.109347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.109361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.109692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.109705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.109994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.243 [2024-11-07 13:44:35.110010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.243 qpair failed and we were unable to recover it. 00:39:27.243 [2024-11-07 13:44:35.110341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.110355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 
00:39:27.244 [2024-11-07 13:44:35.110669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.110683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.110889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.110903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.111197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.111211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.111525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.111540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.111873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.111888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.112236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.112249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.112562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.112575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.112780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.112793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.113109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.113123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.113450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.113463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 
00:39:27.244 [2024-11-07 13:44:35.113766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.113779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.114110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.114123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.114447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.114460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.114759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.114773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.115071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.115085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.115461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.115474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.115784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.115798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.116001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.116015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.116334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.116349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.116657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.116671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 
00:39:27.244 [2024-11-07 13:44:35.117010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.117024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.117402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.117416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.117749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.117762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.118090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.118105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.118432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.118447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.118630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.118645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.118960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.118974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.119303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.119317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.119645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.119660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.120002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.120017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 
00:39:27.244 [2024-11-07 13:44:35.120350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.120364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.120683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.120698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.121007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.121022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.121371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.121386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.121626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.121640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.244 qpair failed and we were unable to recover it. 00:39:27.244 [2024-11-07 13:44:35.121845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.244 [2024-11-07 13:44:35.121859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.122200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.122214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.122533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.122547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.122881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.122898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.123195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.123209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 
00:39:27.245 [2024-11-07 13:44:35.123533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.123548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.123733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.123748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.124047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.124062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.124377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.124392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.124761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.124775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.125093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.125108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.125441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.125456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.125785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.125799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.126102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.126116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.126442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.126456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 
00:39:27.245 [2024-11-07 13:44:35.126736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.126751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.126957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.126972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.127157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.127171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.127494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.127508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.127825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.127839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.128042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.128059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.128384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.128399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.128730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.128745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.128943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.128957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.129246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.129260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 
00:39:27.245 [2024-11-07 13:44:35.129586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.129599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.129781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.129795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.130122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.130137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.130508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.130522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.130810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.130828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.131139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.131154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.131469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.131483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.245 [2024-11-07 13:44:35.131786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.245 [2024-11-07 13:44:35.131800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.245 qpair failed and we were unable to recover it. 00:39:27.246 [2024-11-07 13:44:35.132004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.246 [2024-11-07 13:44:35.132018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.246 qpair failed and we were unable to recover it. 00:39:27.246 [2024-11-07 13:44:35.132345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.246 [2024-11-07 13:44:35.132358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.246 qpair failed and we were unable to recover it. 
00:39:27.246 [2024-11-07 13:44:35.132690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.246 [2024-11-07 13:44:35.132703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.246 qpair failed and we were unable to recover it. 00:39:27.246 [2024-11-07 13:44:35.133026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.246 [2024-11-07 13:44:35.133040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.246 qpair failed and we were unable to recover it. 00:39:27.246 [2024-11-07 13:44:35.133362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.246 [2024-11-07 13:44:35.133376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.246 qpair failed and we were unable to recover it. 00:39:27.246 [2024-11-07 13:44:35.133707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.246 [2024-11-07 13:44:35.133721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.246 qpair failed and we were unable to recover it. 00:39:27.246 [2024-11-07 13:44:35.133938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.246 [2024-11-07 13:44:35.133952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.246 qpair failed and we were unable to recover it. 00:39:27.246 [2024-11-07 13:44:35.134268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.246 [2024-11-07 13:44:35.134282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.246 qpair failed and we were unable to recover it. 00:39:27.246 [2024-11-07 13:44:35.134615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.246 [2024-11-07 13:44:35.134629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.246 qpair failed and we were unable to recover it. 00:39:27.246 [2024-11-07 13:44:35.134962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.246 [2024-11-07 13:44:35.134977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.246 qpair failed and we were unable to recover it. 00:39:27.246 [2024-11-07 13:44:35.135275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.246 [2024-11-07 13:44:35.135290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.246 qpair failed and we were unable to recover it. 00:39:27.246 [2024-11-07 13:44:35.135630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.246 [2024-11-07 13:44:35.135645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.246 qpair failed and we were unable to recover it. 
00:39:27.246 [2024-11-07 13:44:35.135969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.246 [2024-11-07 13:44:35.135984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.246 qpair failed and we were unable to recover it.
00:39:27.252 [the same three-line error triplet repeats continuously through 2024-11-07 13:44:35.202766, always with errno = 111, tqpair=0x615000417b00, addr=10.0.0.2, port=4420]
00:39:27.252 [2024-11-07 13:44:35.202974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.202988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.203310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.203323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.203668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.203681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.203978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.203992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.204374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.204388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.204664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.204677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.205019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.205032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.205341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.205363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.205687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.205700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.205879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.205894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 
00:39:27.252 [2024-11-07 13:44:35.206196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.206210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.206527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.206540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.206848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.206866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.207164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.207178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.207559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.207573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.207903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.207920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.208227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.208241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.208557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.252 [2024-11-07 13:44:35.208571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.252 qpair failed and we were unable to recover it. 00:39:27.252 [2024-11-07 13:44:35.208868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.208882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.209201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.209214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 
00:39:27.253 [2024-11-07 13:44:35.209520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.209534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.209848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.209861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.210143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.210156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.210471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.210485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.210823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.210837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.211023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.211039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.211360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.211373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.211689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.211702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.212037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.212051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.212331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.212350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 
00:39:27.253 [2024-11-07 13:44:35.212686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.212700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.213023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.213037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.213318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.213331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.213639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.213652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.213984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.213998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.253 [2024-11-07 13:44:35.214327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.253 [2024-11-07 13:44:35.214341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.253 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.214660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.214683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.215002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.215016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.215337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.215352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.215665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.215678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 
00:39:27.527 [2024-11-07 13:44:35.216069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.216083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.216409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.216423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.216765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.216778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.217102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.217116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.217434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.217449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.217762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.217775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.217955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.217970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.218292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.218305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.218656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.218670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.218990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.219004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 
00:39:27.527 [2024-11-07 13:44:35.219342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.219356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.219684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.219697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.219980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.219994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.220285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.220299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.220669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.220683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.527 qpair failed and we were unable to recover it. 00:39:27.527 [2024-11-07 13:44:35.220990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.527 [2024-11-07 13:44:35.221008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.221326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.221339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.221649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.221663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.222000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.222014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.222336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.222349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 
00:39:27.528 [2024-11-07 13:44:35.222665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.222679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.222993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.223007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.223342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.223355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.223678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.223691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.224002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.224016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.224225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.224239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.224561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.224575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.224893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.224907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.225197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.225210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.225534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.225548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 
00:39:27.528 [2024-11-07 13:44:35.225749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.225763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.225958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.225974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.226296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.226310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.226641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.226655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.226946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.226960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.227296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.227310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.227620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.227634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.227944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.227958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.228304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.228317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.228602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.228623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 
00:39:27.528 [2024-11-07 13:44:35.228940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.228954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.229267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.229282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.229623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.229636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.229923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.229937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.230175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.230188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.230514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.230528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.230858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.230876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.231163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.231178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.231503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.231517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.231835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.231848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 
00:39:27.528 [2024-11-07 13:44:35.232073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.232087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.232409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.232422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.232591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.232606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.528 qpair failed and we were unable to recover it. 00:39:27.528 [2024-11-07 13:44:35.232931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.528 [2024-11-07 13:44:35.232946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.233306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.233320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.233636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.233651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.233960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.233974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.234263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.234283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.234656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.234669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.234983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.234997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 
00:39:27.529 [2024-11-07 13:44:35.235317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.235330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.235503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.235518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.235831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.235845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.236149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.236163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.236494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.236508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.236823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.236844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.237241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.237257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.237538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.237551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.237847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.237861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.238215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.238230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 
00:39:27.529 [2024-11-07 13:44:35.238550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.238564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.238879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.238894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.239221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.239234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.239555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.239568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.239905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.239919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.240232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.240245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.240555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.240570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.240869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.240883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.241159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.241172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.241490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.241503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 
00:39:27.529 [2024-11-07 13:44:35.241833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.241846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.242227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.242242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.242581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.242595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.242923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.242938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.243229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.243249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.243560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.243573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.243912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.243933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.244257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.244271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.244588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.244602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 00:39:27.529 [2024-11-07 13:44:35.244905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.529 [2024-11-07 13:44:35.244918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.529 qpair failed and we were unable to recover it. 
00:39:27.529 [2024-11-07 13:44:35.245095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.529 [2024-11-07 13:44:35.245109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.529 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats, with only the timestamps advancing, for every reconnect attempt between 2024-11-07 13:44:35.245 and 13:44:35.312 ...]
00:39:27.535 [2024-11-07 13:44:35.312206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.535 [2024-11-07 13:44:35.312227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.535 qpair failed and we were unable to recover it.
00:39:27.535 [2024-11-07 13:44:35.312544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.312557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.312890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.312904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.313243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.313257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.313433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.313448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.313723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.313736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.313983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.313996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.314311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.314325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.314632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.314645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.314929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.314943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.315306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.315320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 
00:39:27.535 [2024-11-07 13:44:35.315633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.315646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.315934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.315948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.316246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.316259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.316554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.316573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.316889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.535 [2024-11-07 13:44:35.316903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.535 qpair failed and we were unable to recover it. 00:39:27.535 [2024-11-07 13:44:35.317091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.317104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.317427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.317440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.317732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.317745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.317936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.317951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.318275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.318289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 
00:39:27.536 [2024-11-07 13:44:35.318446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.318459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.318789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.318802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.319098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.319116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.319388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.319401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.319620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.319634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.319855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.319872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.320166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.320179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.320492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.320505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.320836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.320849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.321135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.321149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 
00:39:27.536 [2024-11-07 13:44:35.321417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.321431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.321761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.321774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.322065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.322080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.322384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.322398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.322680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.322694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.323009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.323023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.323303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.323317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.323557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.323571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.323870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.323883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.324207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.324220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 
00:39:27.536 [2024-11-07 13:44:35.324548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.324562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.324896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.324909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.325202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.325215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.325539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.325552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.325866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.325880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.326081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.326095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.326422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.326435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.326733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.326746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.536 [2024-11-07 13:44:35.326957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.536 [2024-11-07 13:44:35.326971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.536 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.327254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.327268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 
00:39:27.537 [2024-11-07 13:44:35.327561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.327576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.327868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.327883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.328140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.328154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.328441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.328454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.328770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.328783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.329104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.329117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.329430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.329443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.329666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.329680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.329996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.330010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.330373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.330386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 
00:39:27.537 [2024-11-07 13:44:35.330706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.330719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.331049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.331063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.331352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.331367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.331665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.331678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.331979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.331992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.332304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.332317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.332612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.332626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.332920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.332934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.333308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.333322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.333517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.333530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 
00:39:27.537 [2024-11-07 13:44:35.333827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.333840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.334132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.334146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.334393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.334406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.334691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.334704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.335000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.335015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.335368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.335383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.335706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.335721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.336031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.336044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.336374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.336387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.336619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.336632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 
00:39:27.537 [2024-11-07 13:44:35.336941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.336955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.337268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.337282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.337629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.337643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.337933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.337948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.338299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.338313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.338594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.338608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.338947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.537 [2024-11-07 13:44:35.338961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.537 qpair failed and we were unable to recover it. 00:39:27.537 [2024-11-07 13:44:35.339282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.339296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.339614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.339627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.339930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.339944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 
00:39:27.538 [2024-11-07 13:44:35.340266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.340285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.340486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.340512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.340874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.340905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.341251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.341268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.341523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.341537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.341841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.341855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.342197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.342211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.342530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.342544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.342849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.342876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.343170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.343184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 
00:39:27.538 [2024-11-07 13:44:35.343513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.343527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.343850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.343871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.344220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.344238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.344465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.344480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.344857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.344879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.345051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.345065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.345384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.345399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.345702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.345716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.346032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.346046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.346379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.346394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 
00:39:27.538 [2024-11-07 13:44:35.346687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.346701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.346918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.346932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.347269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.347283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.347581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.347595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.347760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.347774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.348108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.348123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.348487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.348500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.348889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.348903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.349203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.349217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.349539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.349552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 
00:39:27.538 [2024-11-07 13:44:35.349881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.349895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.350343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.350357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.350658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.350672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.350914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.350929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.351165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.351178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.538 [2024-11-07 13:44:35.351377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.538 [2024-11-07 13:44:35.351390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.538 qpair failed and we were unable to recover it. 00:39:27.539 [2024-11-07 13:44:35.351578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.539 [2024-11-07 13:44:35.351591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.539 qpair failed and we were unable to recover it. 00:39:27.539 [2024-11-07 13:44:35.351937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.539 [2024-11-07 13:44:35.351951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.539 qpair failed and we were unable to recover it. 00:39:27.539 [2024-11-07 13:44:35.352177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.539 [2024-11-07 13:44:35.352190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.539 qpair failed and we were unable to recover it. 00:39:27.539 [2024-11-07 13:44:35.352522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.539 [2024-11-07 13:44:35.352536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.539 qpair failed and we were unable to recover it. 
00:39:27.539 [2024-11-07 13:44:35.352746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.539 [2024-11-07 13:44:35.352759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.539 qpair failed and we were unable to recover it.
[... the three messages above repeat verbatim for each connect() retry against tqpair=0x615000417b00 from 13:44:35.352 through 13:44:35.386; only the timestamps change ...]
00:39:27.542 [2024-11-07 13:44:35.387121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.542 [2024-11-07 13:44:35.387136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.542 qpair failed and we were unable to recover it.
00:39:27.542 Read completed with error (sct=0, sc=8)
00:39:27.542 starting I/O failed
[... "completed with error (sct=0, sc=8) / starting I/O failed" repeats for all 32 outstanding I/Os on the qpair: 22 reads and 10 writes ...]
00:39:27.542 [2024-11-07 13:44:35.387474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:27.542 [2024-11-07 13:44:35.387827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.542 [2024-11-07 13:44:35.387842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500042fe80 with addr=10.0.0.2, port=4420
00:39:27.542 qpair failed and we were unable to recover it.
[... one further retry against tqpair=0x61500042fe80 at 13:44:35.388, after which the retry loop resumes against tqpair=0x615000417b00 through 13:44:35.390 ...]
[... identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." messages for tqpair=0x615000417b00 repeat from 13:44:35.391 through 13:44:35.402 ...]
[... five further retries against tqpair=0x615000417b00 through 13:44:35.403 ...]
00:39:27.543 [2024-11-07 13:44:35.404167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000417100 is same with the state(6) to be set
00:39:27.543 [2024-11-07 13:44:35.404825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.543 [2024-11-07 13:44:35.404958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420
00:39:27.543 qpair failed and we were unable to recover it.
[... the connect() retry loop then switches to tqpair=0x61500042fe80 starting at 13:44:35.405 ...]
[... identical connect()/qpair-failure messages repeat for tqpair=0x61500042fe80 from 13:44:35.406 through 13:44:35.411, then for tqpair=0x615000417b00 from 13:44:35.411 through 13:44:35.414 ...]
00:39:27.544 [2024-11-07 13:44:35.415401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.544 [2024-11-07 13:44:35.415414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.544 qpair failed and we were unable to recover it. 00:39:27.544 [2024-11-07 13:44:35.415715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.544 [2024-11-07 13:44:35.415736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.544 qpair failed and we were unable to recover it. 00:39:27.544 [2024-11-07 13:44:35.416016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.544 [2024-11-07 13:44:35.416031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.544 qpair failed and we were unable to recover it. 00:39:27.544 [2024-11-07 13:44:35.416372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.544 [2024-11-07 13:44:35.416386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.544 qpair failed and we were unable to recover it. 00:39:27.544 [2024-11-07 13:44:35.416769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.544 [2024-11-07 13:44:35.416783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.544 qpair failed and we were unable to recover it. 00:39:27.544 [2024-11-07 13:44:35.417176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.544 [2024-11-07 13:44:35.417190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.544 qpair failed and we were unable to recover it. 00:39:27.544 [2024-11-07 13:44:35.417366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.544 [2024-11-07 13:44:35.417379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.544 qpair failed and we were unable to recover it. 00:39:27.544 [2024-11-07 13:44:35.417681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.544 [2024-11-07 13:44:35.417695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.544 qpair failed and we were unable to recover it. 00:39:27.544 [2024-11-07 13:44:35.418008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.544 [2024-11-07 13:44:35.418021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.544 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.418360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.418372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 
00:39:27.545 [2024-11-07 13:44:35.418484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.418500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.418877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.418891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.419273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.419287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.419640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.419653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.419858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.419880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.420208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.420221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.420546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.420559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.420892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.420906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.421235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.421252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.421564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.421578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 
00:39:27.545 [2024-11-07 13:44:35.421910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.421928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.422133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.422147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.422384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.422397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.422737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.422750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.423076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.423090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.423417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.423430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.423663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.423676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.424006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.424020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.424354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.424367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.424576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.424589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 
00:39:27.545 [2024-11-07 13:44:35.424905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.424919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.425229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.425251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.425459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.425472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.425877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.425891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.426209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.426223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.426391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.426404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.426738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.426751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.427191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.427205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.427570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.427584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.427900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.427914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 
00:39:27.545 [2024-11-07 13:44:35.428082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.428097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.428417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.428431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.545 qpair failed and we were unable to recover it. 00:39:27.545 [2024-11-07 13:44:35.428710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.545 [2024-11-07 13:44:35.428724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.429046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.429060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.429381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.429394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.429726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.429740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.430131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.430145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.430452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.430465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.430739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.430753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.431004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.431018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 
00:39:27.546 [2024-11-07 13:44:35.431382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.431395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.431708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.431723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.432050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.432064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.432348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.432361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.432676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.432689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.433033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.433048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.433361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.433374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.433694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.433707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.433946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.433962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.434305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.434319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 
00:39:27.546 [2024-11-07 13:44:35.434640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.434654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.434834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.434849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.435181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.435195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.435508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.435521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.435821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.435834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.436176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.436190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.436499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.436513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.436844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.436858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.437225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.437238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.437546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.437560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 
00:39:27.546 [2024-11-07 13:44:35.437890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.437904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.438244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.438257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.438543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.438557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.438873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.438887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.439213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.439226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.439555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.439568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.439891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.439905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.440275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.440288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.440577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.440597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.546 [2024-11-07 13:44:35.440814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.440828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 
00:39:27.546 [2024-11-07 13:44:35.441229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.546 [2024-11-07 13:44:35.441242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.546 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.441438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.441451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.441890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.441904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.442252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.442266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.442496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.442509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.442831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.442845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.443146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.443160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.443355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.443369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.443550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.443563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.443849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.443871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 
00:39:27.547 [2024-11-07 13:44:35.444243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.444257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.444586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.444600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.444907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.444922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.445237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.445251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.445585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.445599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.445914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.445928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.446098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.446112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.446402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.446415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.446746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.446762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.446976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.446991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 
00:39:27.547 [2024-11-07 13:44:35.447307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.447321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.447637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.447651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.447985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.447999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.448280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.448294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.448616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.448629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.448909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.448923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.449235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.449249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.449610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.449624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.449941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.449956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.450257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.450270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 
00:39:27.547 [2024-11-07 13:44:35.450450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.450465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.450747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.450761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.451083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.451097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.451289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.451303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.451625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.451639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.451943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.451957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.452271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.452285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.452601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.452614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.452928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.547 [2024-11-07 13:44:35.452941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.547 qpair failed and we were unable to recover it. 00:39:27.547 [2024-11-07 13:44:35.453234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.548 [2024-11-07 13:44:35.453248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.548 qpair failed and we were unable to recover it. 
00:39:27.548 [2024-11-07 13:44:35.453578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.548 [2024-11-07 13:44:35.453592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.548 qpair failed and we were unable to recover it. 00:39:27.548 [2024-11-07 13:44:35.453910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.548 [2024-11-07 13:44:35.453924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.548 qpair failed and we were unable to recover it. 00:39:27.548 [2024-11-07 13:44:35.454249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.548 [2024-11-07 13:44:35.454262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.548 qpair failed and we were unable to recover it. 00:39:27.548 [2024-11-07 13:44:35.454542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.548 [2024-11-07 13:44:35.454555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.548 qpair failed and we were unable to recover it. 00:39:27.548 [2024-11-07 13:44:35.454875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.548 [2024-11-07 13:44:35.454889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.548 qpair failed and we were unable to recover it. 00:39:27.548 [2024-11-07 13:44:35.455206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.548 [2024-11-07 13:44:35.455227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.548 qpair failed and we were unable to recover it. 00:39:27.548 [2024-11-07 13:44:35.455514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.548 [2024-11-07 13:44:35.455528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.548 qpair failed and we were unable to recover it. 00:39:27.548 [2024-11-07 13:44:35.455838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.548 [2024-11-07 13:44:35.455852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.548 qpair failed and we were unable to recover it. 00:39:27.548 [2024-11-07 13:44:35.456153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.548 [2024-11-07 13:44:35.456166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.548 qpair failed and we were unable to recover it. 00:39:27.548 [2024-11-07 13:44:35.456453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.548 [2024-11-07 13:44:35.456473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.548 qpair failed and we were unable to recover it. 
00:39:27.548 [2024-11-07 13:44:35.456762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.548 [2024-11-07 13:44:35.456776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.548 qpair failed and we were unable to recover it.
00:39:27.548 [... the same three-line failure repeats for every reconnect attempt from 13:44:35.457095 through 13:44:35.523568: connect() to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x615000417b00, and the qpair is never recovered ...]
00:39:27.827 [2024-11-07 13:44:35.523895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.523910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.524245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.524260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.524583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.524597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.524942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.524957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.525264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.525278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.525607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.525621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.525931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.525946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.526260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.526275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.526598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.526612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.526928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.526943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 
00:39:27.827 [2024-11-07 13:44:35.527289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.527303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.527501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.527516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.527833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.527847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.528169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.528183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.528499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.528513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.528840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.528855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.529086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.529101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.529431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.529445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.529642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.529658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.529984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.529999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 
00:39:27.827 [2024-11-07 13:44:35.530311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.530326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.530637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.530652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.530967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.530981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.531293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.531306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.531634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.531648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.531926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.531943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.532295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.532309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.532661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.532674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.532993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.533007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.533338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.533351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 
00:39:27.827 [2024-11-07 13:44:35.533535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.533550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.533826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.533840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.534147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.534162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.534546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.534560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.534907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.534921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.535229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.535243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.535437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.535450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.535773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.535786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.536102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.536116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 00:39:27.827 [2024-11-07 13:44:35.536443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.827 [2024-11-07 13:44:35.536456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.827 qpair failed and we were unable to recover it. 
00:39:27.828 [2024-11-07 13:44:35.536757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.536771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.537092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.537106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.537422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.537436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.537646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.537659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.537983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.537997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.538311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.538324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.538615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.538635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.538936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.538951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.539259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.539273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.539508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.539522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 
00:39:27.828 [2024-11-07 13:44:35.539841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.539854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.540185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.540198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.540572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.540586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.540796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.540810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.541129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.541144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.541374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.541387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.541697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.541711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.542031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.542045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.542330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.542343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.542653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.542667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 
00:39:27.828 [2024-11-07 13:44:35.542981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.542995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.543227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.543240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.543428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.543443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.543804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.543820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.544130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.544143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.544485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.544503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.544710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.544723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.545030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.545044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.545351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.545364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.545672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.545686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 
00:39:27.828 [2024-11-07 13:44:35.546062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.546078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.546300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.546314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.546524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.546538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.546848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.546872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.547189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.547212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.547520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.547534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.547845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.547858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.828 [2024-11-07 13:44:35.548196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.828 [2024-11-07 13:44:35.548210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.828 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.548528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.548541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.548749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.548765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 
00:39:27.829 [2024-11-07 13:44:35.549055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.549069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.549358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.549371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.549701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.549715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.550070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.550084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.550408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.550421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.550654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.550668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.551060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.551073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.551359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.551372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.551665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.551679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.551993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.552007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 
00:39:27.829 [2024-11-07 13:44:35.552301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.552316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.552714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.552728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.553044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.553059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.553278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.553293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.553629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.553643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.553977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.553991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.554360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.554374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.554668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.554683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.554910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.554925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.555258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.555272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 
00:39:27.829 [2024-11-07 13:44:35.555614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.555628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.555827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.555841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.556055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.556069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.556476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.556491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.556794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.556808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.557190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.557207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.557537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.557551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.557856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.557875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.558234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.558248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.558565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.558579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 
00:39:27.829 [2024-11-07 13:44:35.558924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.558938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.559228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.559242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.559561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.559574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.559774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.559788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.560111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.560125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.560412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.829 [2024-11-07 13:44:35.560433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.829 qpair failed and we were unable to recover it. 00:39:27.829 [2024-11-07 13:44:35.560750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.560763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 00:39:27.830 [2024-11-07 13:44:35.561123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.561137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 00:39:27.830 [2024-11-07 13:44:35.561200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.561215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 00:39:27.830 [2024-11-07 13:44:35.561404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.561418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 
00:39:27.830 [2024-11-07 13:44:35.561747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.561761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 00:39:27.830 [2024-11-07 13:44:35.562110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.562123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 00:39:27.830 [2024-11-07 13:44:35.562311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.562325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 00:39:27.830 [2024-11-07 13:44:35.562650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.562664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 00:39:27.830 [2024-11-07 13:44:35.562876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.562889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 00:39:27.830 [2024-11-07 13:44:35.563273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.563287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 00:39:27.830 [2024-11-07 13:44:35.563598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.563611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 00:39:27.830 [2024-11-07 13:44:35.563829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.563842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 00:39:27.830 [2024-11-07 13:44:35.563993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.564007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 00:39:27.830 [2024-11-07 13:44:35.564299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.830 [2024-11-07 13:44:35.564312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.830 qpair failed and we were unable to recover it. 
00:39:27.830 [2024-11-07 13:44:35.564524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.830 [2024-11-07 13:44:35.564537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.830 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously for timestamps 13:44:35.564757 through 13:44:35.630429 ...]
00:39:27.836 [2024-11-07 13:44:35.630713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.836 [2024-11-07 13:44:35.630727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.836 qpair failed and we were unable to recover it.
00:39:27.836 [2024-11-07 13:44:35.631027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.631041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.631349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.631363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.631646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.631660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.631977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.631992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.632321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.632335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.632623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.632640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.632969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.632983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.633297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.633310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.633605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.633619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.633924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.633938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 
00:39:27.836 [2024-11-07 13:44:35.634246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.634260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.634592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.634614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.634927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.634942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.635331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.635345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.635656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.635670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.635989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.636003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.636336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.636350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.636632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.636646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.636974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.636988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.637311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.637325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 
00:39:27.836 [2024-11-07 13:44:35.637663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.637677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.637992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.638006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.638314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.638328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.638657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.638679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.638997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.639012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.639303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.639317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.639650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.639663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.640002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.640017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.640326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.640339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.640692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.640706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 
00:39:27.836 [2024-11-07 13:44:35.641048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.641063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.836 [2024-11-07 13:44:35.641384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.836 [2024-11-07 13:44:35.641399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.836 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.641721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.641735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.641953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.641968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.642252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.642267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.642596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.642609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.642921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.642935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.643110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.643126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.643326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.643340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.643669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.643682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 
00:39:27.837 [2024-11-07 13:44:35.643970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.643984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.644291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.644304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.644691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.644705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.645028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.645042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.645334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.645348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.645658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.645676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.646005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.646019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.646347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.646361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.646676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.646689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.647023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.647037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 
00:39:27.837 [2024-11-07 13:44:35.647327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.647340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.647713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.647727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.648027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.648042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.648346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.648361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.648667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.648681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.649005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.649019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.649348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.649362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.649678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.649693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.650005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.650019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.650361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.650376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 
00:39:27.837 [2024-11-07 13:44:35.650701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.650715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.651035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.651049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.651335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.651348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.651658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.651672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.651984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.651999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.652284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.652298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.652672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.652686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.652966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.652980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.653303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.653318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 00:39:27.837 [2024-11-07 13:44:35.653619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.837 [2024-11-07 13:44:35.653632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.837 qpair failed and we were unable to recover it. 
00:39:27.838 [2024-11-07 13:44:35.653942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.653956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.654156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.654171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.654510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.654525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.654830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.654844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.655180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.655194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.655513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.655526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.655852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.655871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.656101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.656116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.656447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.656461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.656776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.656790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 
00:39:27.838 [2024-11-07 13:44:35.657092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.657107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.657422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.657436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.657639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.657653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.657962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.657976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.658263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.658276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.658607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.658623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.658910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.658924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.659255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.659269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.659589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.659603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.659941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.659955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 
00:39:27.838 [2024-11-07 13:44:35.660279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.660294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.660607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.660621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.660906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.660922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.661134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.661148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.661331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.661346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.661539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.661553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.661843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.661858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.662184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.662198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.662501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.662516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.662842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.662857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 
00:39:27.838 [2024-11-07 13:44:35.663199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.663214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.663538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.663553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.663759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.663773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.664094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.664109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.664447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.664462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.664676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.664691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.665027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.665042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.665426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.838 [2024-11-07 13:44:35.665440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.838 qpair failed and we were unable to recover it. 00:39:27.838 [2024-11-07 13:44:35.665775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.665789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.666091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.666106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 
00:39:27.839 [2024-11-07 13:44:35.666389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.666403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.666719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.666732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.667022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.667036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.667335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.667349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.667615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.667630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.667856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.667877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.668206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.668220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.668536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.668550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.668815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.668828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.669251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.669266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 
00:39:27.839 [2024-11-07 13:44:35.669469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.669483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.669808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.669822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.670161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.670175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.670537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.670551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.670747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.670760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.671061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.671078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.671407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.671420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.671707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.671721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.672010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.672023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 00:39:27.839 [2024-11-07 13:44:35.672242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.839 [2024-11-07 13:44:35.672255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.839 qpair failed and we were unable to recover it. 
00:39:27.839 [2024-11-07 13:44:35.672524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.839 [2024-11-07 13:44:35.672537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.839 qpair failed and we were unable to recover it.
[... the same three-record sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 2024-11-07 13:44:35.672 through 13:44:35.739 ...]
00:39:27.845 [2024-11-07 13:44:35.739201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.845 [2024-11-07 13:44:35.739215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:27.845 qpair failed and we were unable to recover it.
00:39:27.845 [2024-11-07 13:44:35.739532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.739545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.739876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.739890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.740234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.740248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.740562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.740575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.740777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.740790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.741118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.741132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.741449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.741463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.741759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.741773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.742125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.742139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.742434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.742447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 
00:39:27.845 [2024-11-07 13:44:35.742756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.742769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.743007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.743021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.743332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.743348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.743668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.743683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.744005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.744020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.744324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.744338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.744646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.744659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.744980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.744995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.745356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.745369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.745670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.745683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 
00:39:27.845 [2024-11-07 13:44:35.746016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.746030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.746244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.746261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.746592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.746605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.746939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.746953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.747267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.747280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.747596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.747609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.845 qpair failed and we were unable to recover it. 00:39:27.845 [2024-11-07 13:44:35.747942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.845 [2024-11-07 13:44:35.747957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.748222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.748236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.748532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.748545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.748855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.748874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 
00:39:27.846 [2024-11-07 13:44:35.749103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.749117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.749400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.749413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.749742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.749755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.750087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.750108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.750461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.750474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.750799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.750813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.751170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.751183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.751511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.751524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.751821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.751834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.752201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.752216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 
00:39:27.846 [2024-11-07 13:44:35.752524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.752538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.752872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.752887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.753108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.753121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.753449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.753462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.753763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.753776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.754021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.754035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.754366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.754379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.754713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.754726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.754982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.754996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.755303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.755316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 
00:39:27.846 [2024-11-07 13:44:35.755646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.755659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.755958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.755972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.756286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.756301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.756589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.756609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.756937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.756951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.757162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.757175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.757479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.757493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.757818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.757833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.758152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.758167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.758493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.758507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 
00:39:27.846 [2024-11-07 13:44:35.758829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.758842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.759165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.759180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.759517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.759530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.759848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.759874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.846 [2024-11-07 13:44:35.760180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.846 [2024-11-07 13:44:35.760194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.846 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.760390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.760404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.760707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.760721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.761029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.761043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.761349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.761363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.761671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.761684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 
00:39:27.847 [2024-11-07 13:44:35.761976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.761990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.762294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.762308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.762621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.762635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.762943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.762957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.763293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.763307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.763619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.763632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.764006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.764020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.764316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.764329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.764640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.764654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.764964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.764978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 
00:39:27.847 [2024-11-07 13:44:35.765263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.765277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.765585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.765598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.765912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.765927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.766124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.766137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.766443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.766456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.766755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.766769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.766965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.766981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.767263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.767276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.767583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.767596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.767931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.767945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 
00:39:27.847 [2024-11-07 13:44:35.768262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.768276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.768578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.768592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.768982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.768998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.769282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.769296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.769511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.769524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.769876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.769890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.770203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.770216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.770533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.770546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.770897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.770910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.771211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.771225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 
00:39:27.847 [2024-11-07 13:44:35.771545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.771558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.771906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.771919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.847 [2024-11-07 13:44:35.772224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.847 [2024-11-07 13:44:35.772238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.847 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.772443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.772456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.772740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.772753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.773050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.773064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.773286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.773300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.773608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.773622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.773934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.773948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.774262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.774275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 
00:39:27.848 [2024-11-07 13:44:35.774567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.774580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.774848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.774866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.775174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.775188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.775478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.775492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.775806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.775819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.776104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.776118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.776313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.776328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.776647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.776661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.776956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.776970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.777274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.777287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 
00:39:27.848 [2024-11-07 13:44:35.777602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.777616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.777917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.777931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.778237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.778250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.778645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.778658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.778940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.778955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.779128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.779143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.779446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.779460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.779781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.779794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.780111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.780125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 00:39:27.848 [2024-11-07 13:44:35.780443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.780456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it. 
00:39:27.848 [2024-11-07 13:44:35.780634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.848 [2024-11-07 13:44:35.780647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:27.848 qpair failed and we were unable to recover it.
[... the same three-message failure sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats near-verbatim for roughly two hundred further connect attempts, with only the timestamps advancing (13:44:35.780 through 13:44:35.848; wall clock 00:39:27.848 through 00:39:28.128). Repetitions elided. ...]
00:39:28.128 [2024-11-07 13:44:35.847996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.128 [2024-11-07 13:44:35.848010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.128 qpair failed and we were unable to recover it.
00:39:28.128 [2024-11-07 13:44:35.848374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.128 [2024-11-07 13:44:35.848389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.128 qpair failed and we were unable to recover it. 00:39:28.128 [2024-11-07 13:44:35.848716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.128 [2024-11-07 13:44:35.848729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.128 qpair failed and we were unable to recover it. 00:39:28.128 [2024-11-07 13:44:35.849062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.128 [2024-11-07 13:44:35.849077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.128 qpair failed and we were unable to recover it. 00:39:28.128 [2024-11-07 13:44:35.849391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.128 [2024-11-07 13:44:35.849405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.128 qpair failed and we were unable to recover it. 00:39:28.128 [2024-11-07 13:44:35.849738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.128 [2024-11-07 13:44:35.849752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.128 qpair failed and we were unable to recover it. 00:39:28.128 [2024-11-07 13:44:35.850036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.128 [2024-11-07 13:44:35.850049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.128 qpair failed and we were unable to recover it. 00:39:28.128 [2024-11-07 13:44:35.850369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.128 [2024-11-07 13:44:35.850384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.128 qpair failed and we were unable to recover it. 00:39:28.128 [2024-11-07 13:44:35.850564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.128 [2024-11-07 13:44:35.850579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.128 qpair failed and we were unable to recover it. 00:39:28.128 [2024-11-07 13:44:35.850905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.850920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.851291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.851306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 
00:39:28.129 [2024-11-07 13:44:35.851641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.851655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.851968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.851982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.852282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.852295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.852663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.852677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.853017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.853031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.853344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.853358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.853649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.853663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.853967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.853982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.854299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.854313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.854653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.854669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 
00:39:28.129 [2024-11-07 13:44:35.854983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.854998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.855305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.855319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.855692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.855706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.856003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.856017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.856297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.856311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.856641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.856655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.857504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.857533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.857875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.857892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.858668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.858693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.859027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.859043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 
00:39:28.129 [2024-11-07 13:44:35.860104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.860135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.860468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.860484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.860794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.860808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.861123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.861138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.861466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.861479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.861803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.861816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.862192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.862208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.862573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.862588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.862905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.862921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.863139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.863153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 
00:39:28.129 [2024-11-07 13:44:35.863356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.863370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.863705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.863718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.864028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.864043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.864341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.864355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.864670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.864683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.864993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.865007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.129 qpair failed and we were unable to recover it. 00:39:28.129 [2024-11-07 13:44:35.865390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.129 [2024-11-07 13:44:35.865404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.865701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.865715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.866025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.866039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.866314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.866329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 
00:39:28.130 [2024-11-07 13:44:35.866615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.866629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.866957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.866973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.867273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.867288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.867663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.867676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.867961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.867975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.868308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.868321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.868712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.868726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.869032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.869046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.869341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.869355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.869754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.869770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 
00:39:28.130 [2024-11-07 13:44:35.870089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.870104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.870460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.870475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.870799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.870812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.871201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.871216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.871506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.871523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.871837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.871851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.872153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.872167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.872503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.872517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.872834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.872848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.873237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.873252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 
00:39:28.130 [2024-11-07 13:44:35.873543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.873557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.873874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.873888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.874231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.874246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.874534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.874547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.874742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.874757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.875101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.875116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.875399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.875412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.875692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.875705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.876006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.130 [2024-11-07 13:44:35.876019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.130 qpair failed and we were unable to recover it. 00:39:28.130 [2024-11-07 13:44:35.876252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.876266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 
00:39:28.131 [2024-11-07 13:44:35.876589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.876602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.876881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.876895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.877211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.877225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.877547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.877561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.877845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.877858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.878183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.878196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.878512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.878525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.878719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.878732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.879031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.879047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.879390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.879403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 
00:39:28.131 [2024-11-07 13:44:35.879609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.879623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.879925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.879939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.880303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.880316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.880600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.880613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.880944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.880958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.881271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.881285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.881566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.881580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.881907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.881921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.882243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.882256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.882572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.882589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 
00:39:28.131 [2024-11-07 13:44:35.882921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.882935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.883282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.883296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.883584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.883598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.883912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.883926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.884330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.884343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.884628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.884641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.884953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.884967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.885254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.885267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.885584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.885597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.885921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.885935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 
00:39:28.131 [2024-11-07 13:44:35.886278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.886291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.886608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.886622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.886953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.886968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.887286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.887300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.887634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.887647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.888010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.131 [2024-11-07 13:44:35.888025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.131 qpair failed and we were unable to recover it. 00:39:28.131 [2024-11-07 13:44:35.888313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.888327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 00:39:28.132 [2024-11-07 13:44:35.888633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.888646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 00:39:28.132 [2024-11-07 13:44:35.888856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.888874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 00:39:28.132 [2024-11-07 13:44:35.889090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.889105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 
00:39:28.132 [2024-11-07 13:44:35.889421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.889435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 00:39:28.132 [2024-11-07 13:44:35.889780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.889793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 00:39:28.132 [2024-11-07 13:44:35.890099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.890113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 00:39:28.132 [2024-11-07 13:44:35.890413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.890427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 00:39:28.132 [2024-11-07 13:44:35.890741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.890755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 00:39:28.132 [2024-11-07 13:44:35.891087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.891101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 00:39:28.132 [2024-11-07 13:44:35.891256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.891269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 00:39:28.132 [2024-11-07 13:44:35.891449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.891464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 00:39:28.132 [2024-11-07 13:44:35.891788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.891802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 00:39:28.132 [2024-11-07 13:44:35.892080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.132 [2024-11-07 13:44:35.892095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.132 qpair failed and we were unable to recover it. 
00:39:28.132 [2024-11-07 13:44:35.892322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.132 [2024-11-07 13:44:35.892335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.132 qpair failed and we were unable to recover it.
00:39:28.132 [... the same three-line failure (connect() errno = 111, i.e. ECONNREFUSED; sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 2024-11-07 13:44:35.892322 through 2024-11-07 13:44:35.959645 ...]
00:39:28.138 [2024-11-07 13:44:35.959930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.959945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.960227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.960240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.960545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.960558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.960857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.960874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.961180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.961194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.961498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.961512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.961851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.961872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.962174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.962188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.962570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.962583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.962924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.962938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 
00:39:28.138 [2024-11-07 13:44:35.963161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.963175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.963381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.963396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.963780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.963794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.964087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.964101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.964331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.964345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.964627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.964640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.964925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.964940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.965257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.965271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.965574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.965587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.965873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.965886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 
00:39:28.138 [2024-11-07 13:44:35.966171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.966185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.966499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.966513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.966828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.966841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.967268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.967283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.967562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.967576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.967854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.967875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.968165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.968178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.968375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.968390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.968589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.968603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.968898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.968912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 
00:39:28.138 [2024-11-07 13:44:35.969280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.969293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.969684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.969697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.969979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.969992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.970272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.970285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.970609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.970622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.970926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.970940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.138 [2024-11-07 13:44:35.971223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.138 [2024-11-07 13:44:35.971236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.138 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.971552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.971565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.971899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.971913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.972223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.972238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 
00:39:28.139 [2024-11-07 13:44:35.972547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.972561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.972879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.972893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.973186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.973199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.973408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.973421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.973641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.973654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.973961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.973975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.974171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.974184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.974505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.974518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.974806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.974819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.975127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.975141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 
00:39:28.139 [2024-11-07 13:44:35.975455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.975469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.975802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.975816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.976196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.976210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.976534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.976548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.976938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.976954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.977174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.977187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.977399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.977413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.977610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.977623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.977944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.977959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.978277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.978291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 
00:39:28.139 [2024-11-07 13:44:35.978704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.978717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.978998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.979012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.979336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.979349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.979682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.979695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.979901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.979915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.980222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.980235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.980560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.980575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.980890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.980904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.981213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.981227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.139 [2024-11-07 13:44:35.981577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.981590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 
00:39:28.139 [2024-11-07 13:44:35.981899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.139 [2024-11-07 13:44:35.981913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.139 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.982226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.982239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.982570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.982583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.982802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.982815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.983141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.983155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.983484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.983497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.983811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.983824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.984232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.984246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.984549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.984563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.984881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.984897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 
00:39:28.140 [2024-11-07 13:44:35.985209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.985223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.985561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.985574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.985899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.985914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.986241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.986254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.986585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.986599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.986914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.986927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.987243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.987256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.987602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.987614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.987917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.987931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.988142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.988156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 
00:39:28.140 [2024-11-07 13:44:35.988480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.988493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.988806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.988819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.989135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.989149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.989480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.989495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.989864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.989879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.990288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.990302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.990584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.990597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.990931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.990945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.991241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.991254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.991573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.991586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 
00:39:28.140 [2024-11-07 13:44:35.991889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.991903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.992224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.992237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.992521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.992542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.992905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.992919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.993240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.993253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.993539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.993560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.993759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.993772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.994059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.994073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.994447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.140 [2024-11-07 13:44:35.994460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.140 qpair failed and we were unable to recover it. 00:39:28.140 [2024-11-07 13:44:35.994753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.994767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 
00:39:28.141 [2024-11-07 13:44:35.994950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.994965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.995259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.995273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.995591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.995605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.995886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.995900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.996188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.996201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.996516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.996529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.996844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.996857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.997146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.997165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.997521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.997534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.997842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.997859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 
00:39:28.141 [2024-11-07 13:44:35.998188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.998202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.998530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.998544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.998881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.998896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.999253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.999267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.999590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.999604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:35.999920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:35.999934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:36.000223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:36.000244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:36.000553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:36.000567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:36.000882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:36.000896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 00:39:28.141 [2024-11-07 13:44:36.001206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.141 [2024-11-07 13:44:36.001220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.141 qpair failed and we were unable to recover it. 
00:39:28.141 [2024-11-07 13:44:36.001533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.141 [2024-11-07 13:44:36.001546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.141 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats roughly 200 more times between 13:44:36.001533 and 13:44:36.068318 ...]
00:39:28.147 [2024-11-07 13:44:36.068305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.147 [2024-11-07 13:44:36.068318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.147 qpair failed and we were unable to recover it.
00:39:28.147 [2024-11-07 13:44:36.068632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.068645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.068974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.068989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.069259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.069273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.069586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.069600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.069911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.069925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.070215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.070228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.070530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.070545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.070867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.070882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.071221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.071234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.071546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.071560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 
00:39:28.147 [2024-11-07 13:44:36.071887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.071900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.072195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.072209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.072519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.072532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.072833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.072846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.073084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.073098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.073462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.073475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.073757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.073770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.074084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.074097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.074414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.074429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.074795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.074809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 
00:39:28.147 [2024-11-07 13:44:36.075157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.075171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.075372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.075386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.075653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.075670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.075906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.075919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.076250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.076264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.076440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.076454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.076786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.076799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.077120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.077140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.077460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.077473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.077818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.077832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 
00:39:28.147 [2024-11-07 13:44:36.078174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.078188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.078493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.147 [2024-11-07 13:44:36.078507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.147 qpair failed and we were unable to recover it. 00:39:28.147 [2024-11-07 13:44:36.078795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.078809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.079115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.079129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.079315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.079330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.079654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.079668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.079931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.079945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.080272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.080285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.080617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.080630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.080943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.080958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 
00:39:28.148 [2024-11-07 13:44:36.081153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.081168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.081472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.081486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.081821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.081834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.082154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.082173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.082466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.082479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.082762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.082776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.083103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.083120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.083459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.083472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.083785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.083798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.084180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.084194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 
00:39:28.148 [2024-11-07 13:44:36.084546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.084559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.084875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.084889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.085217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.085231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.085443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.085456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.085775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.085788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.086097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.086110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.086447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.086461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.086741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.086754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.087082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.087096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.087430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.087444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 
00:39:28.148 [2024-11-07 13:44:36.087770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.087784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.088105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.088120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.088467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.088481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.088794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.088807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.089183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.089197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.148 [2024-11-07 13:44:36.089493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.148 [2024-11-07 13:44:36.089507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.148 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.089814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.089828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.090156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.090170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.090490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.090504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.090822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.090837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 
00:39:28.149 [2024-11-07 13:44:36.091164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.091179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.091520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.091534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.091842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.091856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.092159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.092173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.092515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.092529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.092834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.092848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.093067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.093081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.093303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.093317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.093512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.093526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.093855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.093873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 
00:39:28.149 [2024-11-07 13:44:36.094197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.094211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.094539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.094556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.094849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.094875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.095204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.095217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.095534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.095547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.095934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.095948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.096142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.096159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.096459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.096472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.096799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.096812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.097105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.097120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 
00:39:28.149 [2024-11-07 13:44:36.097342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.097355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.097676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.097689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.097996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.098011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.098318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.098332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.098684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.098697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.098994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.099007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.099339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.099352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.099537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.099552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.099855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.099873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.100181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.100194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 
00:39:28.149 [2024-11-07 13:44:36.100505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.100519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.100850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.100867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.101184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.101198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.101505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.149 [2024-11-07 13:44:36.101518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.149 qpair failed and we were unable to recover it. 00:39:28.149 [2024-11-07 13:44:36.101832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.101845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.101949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.101963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.102287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.102301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.102583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.102596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.102918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.102933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.103226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.103240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 
00:39:28.150 [2024-11-07 13:44:36.103569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.103582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.103897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.103911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.104242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.104255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.104589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.104603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.104976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.104990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.105306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.105320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.105651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.105664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.105936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.105957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.106276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.106289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.106583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.106604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 
00:39:28.150 [2024-11-07 13:44:36.106930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.106944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.107261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.107274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.107611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.107624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.107820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.107835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.108175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.108190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.108491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.108505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.108818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.108834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.109149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.109163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.109492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.109505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.109783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.109796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 
00:39:28.150 [2024-11-07 13:44:36.110003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.110017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.110319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.110332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.110639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.110653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.110964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.110979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.111314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.111328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.111640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.111654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.111975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.111990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.112278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.112291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.112578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.112592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 00:39:28.150 [2024-11-07 13:44:36.112931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.150 [2024-11-07 13:44:36.112945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.150 qpair failed and we were unable to recover it. 
00:39:28.429 [2024-11-07 13:44:36.177332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.177346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.177678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.177691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.177971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.177985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.178309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.178322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.178606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.178619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.178913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.178927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.179235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.179248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.179579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.179592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.179915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.179930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.180247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.180260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 
00:39:28.429 [2024-11-07 13:44:36.180592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.180606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.180915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.180929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.181245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.181258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.181590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.181605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.181915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.181929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.182030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.182044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.429 qpair failed and we were unable to recover it. 00:39:28.429 [2024-11-07 13:44:36.182399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.429 [2024-11-07 13:44:36.182412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.182694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.182707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.183034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.183048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.183340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.183354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 
00:39:28.430 [2024-11-07 13:44:36.183685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.183698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.184056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.184071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.184402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.184415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.184581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.184595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.184803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.184816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.185189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.185203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.185516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.185529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.185686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.185701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.186064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.186078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.186397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.186411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 
00:39:28.430 [2024-11-07 13:44:36.186709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.186723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.187012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.187026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.187384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.187397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.187678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.187691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.188025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.188039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.188347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.188361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.188674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.188688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.188884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.188897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.189260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.189274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.189605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.189620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 
00:39:28.430 [2024-11-07 13:44:36.189957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.189971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.190290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.190305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.190613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.190626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.190961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.190975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.191352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.191365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.191654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.191669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.191995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.192009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.192316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.192329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.192637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.192650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.192993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.193006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 
00:39:28.430 [2024-11-07 13:44:36.193339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.193352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.193673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.193692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.194008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.194022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.194341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.194357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.430 [2024-11-07 13:44:36.194677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.430 [2024-11-07 13:44:36.194690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.430 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.195014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.195028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.195352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.195365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.195687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.195708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.196019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.196034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.196223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.196237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 
00:39:28.431 [2024-11-07 13:44:36.196569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.196582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.196903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.196917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.197121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.197135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.197462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.197476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.197814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.197828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.198136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.198152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.198471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.198492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.198824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.198838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.199148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.199163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.199387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.199401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 
00:39:28.431 [2024-11-07 13:44:36.199707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.199721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.199943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.199957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.200283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.200297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.200622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.200636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.200950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.200964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.201255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.201269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.201600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.201613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.201926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.201939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.202255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.202268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.202497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.202510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 
00:39:28.431 [2024-11-07 13:44:36.202683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.202697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.203016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.203031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.203351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.203373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.203685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.203699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.203999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.204013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.204330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.204343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.204655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.204669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.204875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.204889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.205211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.205225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.205546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.205560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 
00:39:28.431 [2024-11-07 13:44:36.205764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.205778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.206087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.206101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.206339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.431 [2024-11-07 13:44:36.206352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.431 qpair failed and we were unable to recover it. 00:39:28.431 [2024-11-07 13:44:36.206674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.206689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.206973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.206986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.207325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.207338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.207615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.207629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.207965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.207979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.208261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.208274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.208602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.208615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 
00:39:28.432 [2024-11-07 13:44:36.208898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.208912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.209193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.209206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.209517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.209530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.209875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.209889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.210213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.210235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.210619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.210632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.210923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.210937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.211267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.211280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.211483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.211496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.211769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.211782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 
00:39:28.432 [2024-11-07 13:44:36.212084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.212098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.212418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.212432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.212778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.212792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.213164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.213178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.213500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.213514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.213835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.213848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.214164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.214178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.214370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.214385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.214679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.214692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.215009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.215023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 
00:39:28.432 [2024-11-07 13:44:36.215307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.215330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.215617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.215630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.215940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.215954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.216147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.216162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.216497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.216510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.216819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.216833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.217150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.217164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.217495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.217509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.217844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.217858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 00:39:28.432 [2024-11-07 13:44:36.218168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.432 [2024-11-07 13:44:36.218181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.432 qpair failed and we were unable to recover it. 
00:39:28.432 [2024-11-07 13:44:36.218525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.432 [2024-11-07 13:44:36.218538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.432 qpair failed and we were unable to recover it.
00:39:28.432 [2024-11-07 13:44:36.218847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.432 [2024-11-07 13:44:36.218876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.432 qpair failed and we were unable to recover it.
...
00:39:28.438 [2024-11-07 13:44:36.285930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.438 [2024-11-07 13:44:36.285946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.438 qpair failed and we were unable to recover it.
00:39:28.438 [2024-11-07 13:44:36.286280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-11-07 13:44:36.286295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-11-07 13:44:36.286625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-11-07 13:44:36.286641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-11-07 13:44:36.286964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-11-07 13:44:36.286980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-11-07 13:44:36.287309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-11-07 13:44:36.287323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-11-07 13:44:36.287649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-11-07 13:44:36.287663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-11-07 13:44:36.287992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-11-07 13:44:36.288007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-11-07 13:44:36.288326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-11-07 13:44:36.288342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-11-07 13:44:36.288670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-11-07 13:44:36.288686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-11-07 13:44:36.289005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-11-07 13:44:36.289020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-11-07 13:44:36.289211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-11-07 13:44:36.289227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 
00:39:28.438 [2024-11-07 13:44:36.289535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-11-07 13:44:36.289551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.438 [2024-11-07 13:44:36.289879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.438 [2024-11-07 13:44:36.289895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.438 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.290214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.290229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.290563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.290577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.290908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.290924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.291269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.291284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.291607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.291622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.291820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.291836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.292169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.292185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.292512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.292528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 
00:39:28.439 [2024-11-07 13:44:36.292854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.292874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.293196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.293211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.293568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.293584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.293887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.293902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.294220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.294235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.294559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.294575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.294902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.294918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.295211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.295226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.295551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.295566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.295893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.295909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 
00:39:28.439 [2024-11-07 13:44:36.296219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.296233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.296572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.296588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.296948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.296964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.297280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.297297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.297624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.297639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.297946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.297961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.298153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.298167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.298482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.298496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.298716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.298731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.299070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.299085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 
00:39:28.439 [2024-11-07 13:44:36.299445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.299460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.299783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.299798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.300101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.300116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.300449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.300464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.300761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.300776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.301071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.301086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.301300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.301315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.301631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.301645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.301963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.301979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 00:39:28.439 [2024-11-07 13:44:36.302302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.439 [2024-11-07 13:44:36.302317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.439 qpair failed and we were unable to recover it. 
00:39:28.439 [2024-11-07 13:44:36.302644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.302659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.302989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.303005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.303194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.303210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.303507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.303523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.303859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.303880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.304203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.304218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.304542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.304558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.304938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.304953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.305270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.305285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.305604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.305620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 
00:39:28.440 [2024-11-07 13:44:36.305914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.305929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.306253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.306267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.306585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.306600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.306903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.306919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.307242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.307257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.307572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.307588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.307943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.307959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.308282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.308296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.308607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.308623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.308954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.308970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 
00:39:28.440 [2024-11-07 13:44:36.309292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.309306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.309618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.309633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.309929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.309945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.310306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.310325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.310607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.310622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.310926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.310942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.311266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.311281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.311576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.311591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.311903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.311918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.312226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.312241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 
00:39:28.440 [2024-11-07 13:44:36.312572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.312587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.312874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.312890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.313179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.313193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.313503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.313519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.313831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.313847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.314192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.314207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.314533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.314548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.440 [2024-11-07 13:44:36.314886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.440 [2024-11-07 13:44:36.314902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.440 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.315213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.315227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.315474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.315489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 
00:39:28.441 [2024-11-07 13:44:36.315813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.315828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.316130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.316146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.316503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.316518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.316844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.316858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.317009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.317023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.317237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.317252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.317554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.317569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.317898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.317913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.318231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.318247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.318534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.318550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 
00:39:28.441 [2024-11-07 13:44:36.318879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.318895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.319218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.319233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.319564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.319580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.319913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.319928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.320136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.320151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.320457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.320472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.320783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.320799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.321047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.321062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.321384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.321398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.321706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.321720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 
00:39:28.441 [2024-11-07 13:44:36.322028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.322042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.322378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.322393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.322696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.322711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.323026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.323044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.323379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.323395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.323710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.323725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.324028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.324044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.324386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.324401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.324715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.324731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.325027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.325042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 
00:39:28.441 [2024-11-07 13:44:36.325337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.325352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.325661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.325677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.325986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.326002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.326333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.326348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.326665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.326679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.326882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.441 [2024-11-07 13:44:36.326897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.441 qpair failed and we were unable to recover it. 00:39:28.441 [2024-11-07 13:44:36.327209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.442 [2024-11-07 13:44:36.327225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.442 qpair failed and we were unable to recover it. 00:39:28.442 [2024-11-07 13:44:36.327552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.442 [2024-11-07 13:44:36.327569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.442 qpair failed and we were unable to recover it. 00:39:28.442 [2024-11-07 13:44:36.327893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.442 [2024-11-07 13:44:36.327908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.442 qpair failed and we were unable to recover it. 00:39:28.442 [2024-11-07 13:44:36.328243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.442 [2024-11-07 13:44:36.328258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.442 qpair failed and we were unable to recover it. 
00:39:28.442 [2024-11-07 13:44:36.328574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.442 [2024-11-07 13:44:36.328590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.442 qpair failed and we were unable to recover it.
[... the same three-line error cycle repeats roughly 200 more times, wall-clock 13:44:36.328 through 13:44:36.397, always for tqpair=0x615000417b00 with addr=10.0.0.2, port=4420: every connect() attempt fails with errno = 111 and the qpair cannot be recovered; duplicate log records elided ...]
00:39:28.447 [2024-11-07 13:44:36.397990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.447 [2024-11-07 13:44:36.398008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.447 qpair failed and we were unable to recover it. 00:39:28.447 [2024-11-07 13:44:36.398334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.447 [2024-11-07 13:44:36.398348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.447 qpair failed and we were unable to recover it. 00:39:28.447 [2024-11-07 13:44:36.398679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.447 [2024-11-07 13:44:36.398694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.447 qpair failed and we were unable to recover it. 00:39:28.447 [2024-11-07 13:44:36.399019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.447 [2024-11-07 13:44:36.399035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.447 qpair failed and we were unable to recover it. 00:39:28.447 [2024-11-07 13:44:36.399358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.447 [2024-11-07 13:44:36.399374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.447 qpair failed and we were unable to recover it. 00:39:28.447 [2024-11-07 13:44:36.399555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.447 [2024-11-07 13:44:36.399572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.447 qpair failed and we were unable to recover it. 00:39:28.447 [2024-11-07 13:44:36.399750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.447 [2024-11-07 13:44:36.399766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.447 qpair failed and we were unable to recover it. 00:39:28.447 [2024-11-07 13:44:36.400048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.447 [2024-11-07 13:44:36.400063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.447 qpair failed and we were unable to recover it. 00:39:28.447 [2024-11-07 13:44:36.400420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.400434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.400762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.400777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 
00:39:28.448 [2024-11-07 13:44:36.401071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.401086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.401417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.401433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.401740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.401755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.402132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.402148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.402445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.402460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.402776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.402791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.403119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.403134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.403468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.403483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.403810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.403826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.404151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.404166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 
00:39:28.448 [2024-11-07 13:44:36.404381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.404395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.404720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.404736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.405029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.405046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.405333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.405352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.405676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.405690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.406016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.406032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.406386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.406401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.406698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.406714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.407026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.407041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.407346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.407361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 
00:39:28.448 [2024-11-07 13:44:36.407669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.407684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.407983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.407998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.408325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.408340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.408703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.408719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.409029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.409045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.409330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.409345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.409627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.409642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.409979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.409995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.410321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.410335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.410517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.410533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 
00:39:28.448 [2024-11-07 13:44:36.410807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.410825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.411112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.411127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.411454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.411470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.411792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.411807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.412120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.412136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.412507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.412522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.448 [2024-11-07 13:44:36.412720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.448 [2024-11-07 13:44:36.412735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.448 qpair failed and we were unable to recover it. 00:39:28.449 [2024-11-07 13:44:36.412841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.449 [2024-11-07 13:44:36.412855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.449 qpair failed and we were unable to recover it. 00:39:28.449 [2024-11-07 13:44:36.413169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.449 [2024-11-07 13:44:36.413183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.449 qpair failed and we were unable to recover it. 00:39:28.449 [2024-11-07 13:44:36.413510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.449 [2024-11-07 13:44:36.413526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.449 qpair failed and we were unable to recover it. 
00:39:28.449 [2024-11-07 13:44:36.413851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.449 [2024-11-07 13:44:36.413877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.449 qpair failed and we were unable to recover it. 00:39:28.449 [2024-11-07 13:44:36.414162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.449 [2024-11-07 13:44:36.414176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.449 qpair failed and we were unable to recover it. 00:39:28.449 [2024-11-07 13:44:36.414505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.449 [2024-11-07 13:44:36.414521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.449 qpair failed and we were unable to recover it. 00:39:28.449 [2024-11-07 13:44:36.414700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.449 [2024-11-07 13:44:36.414715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.449 qpair failed and we were unable to recover it. 00:39:28.723 [2024-11-07 13:44:36.415035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.723 [2024-11-07 13:44:36.415051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.723 qpair failed and we were unable to recover it. 00:39:28.723 [2024-11-07 13:44:36.415361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.723 [2024-11-07 13:44:36.415377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.723 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.415676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.415691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.416020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.416035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.416369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.416385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.416735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.416750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 
00:39:28.724 [2024-11-07 13:44:36.417078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.417093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.417408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.417423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.417754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.417769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.418094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.418110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.418439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.418454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.418750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.418765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.419083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.419099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.419430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.419444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.419665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.419680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.419995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.420010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 
00:39:28.724 [2024-11-07 13:44:36.420190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.420204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.420547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.420561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.420890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.420906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.421197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.421212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.421519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.421534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.421859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.421882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.422202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.422217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.422548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.422563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.422872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.422887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.423176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.423190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 
00:39:28.724 [2024-11-07 13:44:36.423532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.423550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.423872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.423887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.424222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.424237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.424553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.424568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.424855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.424874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.425178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.425193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.425510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.425525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.425850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.425872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.426209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.426224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.426539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.426554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 
00:39:28.724 [2024-11-07 13:44:36.426918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.426942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.427252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.427266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.427602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.724 [2024-11-07 13:44:36.427617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.724 qpair failed and we were unable to recover it. 00:39:28.724 [2024-11-07 13:44:36.427989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.428004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.428333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.428348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.428670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.428684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.428975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.428990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.429313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.429327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.429643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.429658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.430022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.430037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 
00:39:28.725 [2024-11-07 13:44:36.430330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.430344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.430673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.430687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.431018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.431032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.431327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.431342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.431673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.431688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.432013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.432028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.432355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.432370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.432705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.432720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.433051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.433067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.433385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.433399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 
00:39:28.725 [2024-11-07 13:44:36.433714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.433729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.434060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.434075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.434408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.434424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.434741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.434756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.435090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.435106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.435407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.435422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.435745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.435760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.436089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.436104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.436431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.436447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.436769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.436784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 
00:39:28.725 [2024-11-07 13:44:36.437112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.437130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.437430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.437444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.437754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.437769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.438048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.438062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.438354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.438368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.438569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.438584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.438907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.438922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.439239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.439262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.725 [2024-11-07 13:44:36.439602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.725 [2024-11-07 13:44:36.439617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.725 qpair failed and we were unable to recover it. 00:39:28.726 [2024-11-07 13:44:36.439927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.726 [2024-11-07 13:44:36.439942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.726 qpair failed and we were unable to recover it. 
00:39:28.726 [2024-11-07 13:44:36.440127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.726 [2024-11-07 13:44:36.440141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.726 qpair failed and we were unable to recover it.
00:39:28.726 [... the three lines above repeat for every reconnect attempt from 13:44:36.440127 through 13:44:36.491670, always with errno = 111 against tqpair=0x615000417b00 (addr=10.0.0.2, port=4420); each attempt ends with "qpair failed and we were unable to recover it." ...]
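On Linux, errno = 111 is ECONNREFUSED: the initiator's TCP SYN to 10.0.0.2:4420 gets an RST because nothing is listening on that port while the target application is down, so every qpair connect attempt fails immediately. A minimal sketch of the same condition outside the test suite follows; the address, port, and retry count here are illustrative assumptions, not values taken from the test scripts.

#!/usr/bin/env bash
# Minimal sketch: retry a TCP connect the way the initiator above does and
# observe ECONNREFUSED (errno 111) while no listener is on the port.
# ADDR, PORT, and the retry count are illustrative assumptions.
ADDR=10.0.0.2
PORT=4420
for attempt in 1 2 3 4 5; do
    # bash's /dev/tcp pseudo-device issues a connect(); the subshell closes
    # the descriptor again immediately on success.
    if (exec 3<>"/dev/tcp/$ADDR/$PORT") 2>/dev/null; then
        echo "attempt $attempt: connected to $ADDR:$PORT"
        break
    fi
    echo "attempt $attempt: connect() failed (errno 111, ECONNREFUSED); retrying"
    sleep 0.1
done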
00:39:28.730 [... the connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence keeps repeating against tqpair=0x615000417b00 (addr=10.0.0.2, port=4420) through 13:44:36.505255, interleaved with the test script output below ...]
00:39:28.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4147546 Killed "${NVMF_APP[@]}" "$@"
00:39:28.730 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:39:28.730 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:39:28.730 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:39:28.731 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:39:28.731 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:28.731 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4148563
00:39:28.731 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4148563
00:39:28.731 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:39:28.731 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 4148563 ']'
00:39:28.732 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:28.732 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:39:28.732 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:28.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:28.732 13:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:28.732 [2024-11-07 13:44:36.505581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-11-07 13:44:36.505597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-11-07 13:44:36.505925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-11-07 13:44:36.505942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-11-07 13:44:36.506266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-11-07 13:44:36.506281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-11-07 13:44:36.506569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-11-07 13:44:36.506584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-11-07 13:44:36.506907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-11-07 13:44:36.506923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-11-07 13:44:36.507203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-11-07 13:44:36.507219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-11-07 13:44:36.507543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-11-07 13:44:36.507560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-11-07 13:44:36.507768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-11-07 13:44:36.507784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 00:39:28.732 [2024-11-07 13:44:36.508098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.732 [2024-11-07 13:44:36.508113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.732 qpair failed and we were unable to recover it. 
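For context: the trace above shows nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with core mask 0xF0 (bits 4-7 set, so the target runs on cores 4-7), after which waitforlisten polls until the app accepts connections on its RPC socket, /var/tmp/spdk.sock, giving up after max_retries=100. Below is a minimal standalone C sketch of that wait loop; the 0.5 s retry interval is an assumption, and the real waitforlisten is a shell function in autotest_common.sh, not this code.

/* Conceptual sketch of the waitforlisten pattern seen in the trace:
 * poll until something is accepting connections on the app's RPC
 * socket, giving up after a bounded number of retries. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int rpc_socket_ready(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;

    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    const char *rpc_addr = "/var/tmp/spdk.sock";   /* from the trace above */
    for (int retry = 0; retry < 100; retry++) {    /* max_retries=100 in the trace */
        if (rpc_socket_ready(rpc_addr)) {
            printf("listening on %s\n", rpc_addr);
            return 0;
        }
        usleep(500 * 1000);                        /* retry interval is an assumption */
    }
    fprintf(stderr, "timed out waiting for %s\n", rpc_addr);
    return 1;
}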
[the connect() failed / sock connection error / "qpair failed and we were unable to recover it." sequence continues uninterrupted with advancing timestamps (13:44:36.508 through 13:44:36.560); no attempt to reconnect tqpair=0x615000417b00 at 10.0.0.2, port 4420 succeeds, each failing with errno = 111]
00:39:28.736 [2024-11-07 13:44:36.560003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.560019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.560327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.560344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.560672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.560690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.561044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.561062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.561356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.561372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.561712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.561727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.562068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.562085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.562379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.562394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.562738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.562753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.562928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.562942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 
00:39:28.737 [2024-11-07 13:44:36.563288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.563302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.563645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.563665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.563873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.563889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.564262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.564277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.564623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.564639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.564868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.564884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.565221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.565236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.565546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.565561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.565768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.565782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.566103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.566118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 
00:39:28.737 [2024-11-07 13:44:36.566456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.566471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.566797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.566811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.567125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.567140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.567336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.567351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.567696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.567713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.568064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.568082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.568420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.568435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.568810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.568825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.569154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.569171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.569552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.569568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 
00:39:28.737 [2024-11-07 13:44:36.569908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.569928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.570253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.570268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.570602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.570618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.737 [2024-11-07 13:44:36.570936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.737 [2024-11-07 13:44:36.570953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.737 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.571281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.571297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.571624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.571640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.571808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.571825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.572148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.572164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.572493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.572508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.572691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.572705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 
00:39:28.738 [2024-11-07 13:44:36.573036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.573053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.573389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.573405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.573794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.573808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.574155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.574171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.574495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.574509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.574784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.574799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.575183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.575199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.575544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.575559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.575879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.575894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.576109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.576124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 
00:39:28.738 [2024-11-07 13:44:36.576455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.576470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.576811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.576828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.577160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.577176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.577507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.577522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.577891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.577907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.578221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.578236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.578546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.578561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.578883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.578900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.579105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.579120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.579421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.579437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 
00:39:28.738 [2024-11-07 13:44:36.579757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.579772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.580089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.580106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.580320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.580336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.580684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.580699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.738 [2024-11-07 13:44:36.581032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.738 [2024-11-07 13:44:36.581048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.738 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.581379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.581394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.581797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.581812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.582112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.582128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.582451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.582466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.582786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.582801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 
00:39:28.739 [2024-11-07 13:44:36.582996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.583012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.583351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.583366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.583710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.583725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.584031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.584046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.584306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.584320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.584498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.584512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.584844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.584859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.585195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.585211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.585537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.585552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.585729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.585744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 
00:39:28.739 [2024-11-07 13:44:36.586039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.586055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.586378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.586394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.586718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.586733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.587065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.587081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.587400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.587415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.587793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.587808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.588129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.588144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.588456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.588471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.588800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.588815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 00:39:28.739 [2024-11-07 13:44:36.589135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.739 [2024-11-07 13:44:36.589151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.739 qpair failed and we were unable to recover it. 
00:39:28.739 [2024-11-07 13:44:36.590813] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:39:28.739 [2024-11-07 13:44:36.590921] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:39:28.739 [2024-11-07 13:44:36.591148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.739 [2024-11-07 13:44:36.591163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.739 qpair failed and we were unable to recover it.
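On Linux, errno 111 is ECONNREFUSED: the initiator keeps calling connect() toward the target at 10.0.0.2 port 4420 (the standard NVMe/TCP port) while nothing is accepting on that port, so every attempt is refused and the qpair can never be established. The minimal sketch below is illustrative only, not part of the test suite; it assumes no listener on 127.0.0.1:4420 and reproduces the same failure mode that posix_sock_create() reports throughout this log.

/* connect_refused.c - reproduce "connect() failed, errno = 111" */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    /* Create a plain TCP socket, as the SPDK posix sock layer does. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                       /* standard NVMe/TCP port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);   /* assumption: no listener here */

    /* With no listener on the port, connect() fails with ECONNREFUSED (111 on Linux). */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Built with cc connect_refused.c and run on a Linux host with that port closed, it prints "connect() failed, errno = 111 (Connection refused)", matching the error records above.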
00:39:28.740 [2024-11-07 13:44:36.592592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.592607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.592818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.592834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.593185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.593200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.593549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.593565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.593914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.593934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.594119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.594134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.594331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.594348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.594651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.594667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.595000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.595017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.595352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.595368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 
00:39:28.740 [2024-11-07 13:44:36.595656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.595673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.596014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.596030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.596361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.596376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.596737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.596753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.597115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.597131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.597419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.597434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.597738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.597753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.598092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.598109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.598440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.598457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.598788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.598804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 
00:39:28.740 [2024-11-07 13:44:36.599113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.599130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.599326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.599341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.599555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.599570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.599794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.599810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.600146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.600162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.600504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.600519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.600850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.600870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.601253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.601268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.601571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.601587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.601929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.601946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 
00:39:28.740 [2024-11-07 13:44:36.602279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.602295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.602599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.602615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.602949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.602964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.603297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.740 [2024-11-07 13:44:36.603313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.740 qpair failed and we were unable to recover it. 00:39:28.740 [2024-11-07 13:44:36.603494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.603509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.603679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.603696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.604027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.604043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.604246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.604259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.604468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.604483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.604771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.604786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 
00:39:28.741 [2024-11-07 13:44:36.605019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.605035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.605354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.605369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.605686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.605703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.606032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.606048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.606395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.606413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.606605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.606621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.606897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.606913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.607239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.607253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.607589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.607604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 00:39:28.741 [2024-11-07 13:44:36.607931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.741 [2024-11-07 13:44:36.607946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.741 qpair failed and we were unable to recover it. 
00:39:28.741 [2024-11-07 13:44:36.608277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.608293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.608616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.608631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.609016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.609031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.609418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.609433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.609754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.609770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.609975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.609990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.610312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.610326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.610526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.610541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.610785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.610801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.611137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.611152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.611491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.611507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.611835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.611853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.612035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.612050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.612390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.612405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.612704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.612720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.612906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.612921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.613112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.613127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.613462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.613477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.741 [2024-11-07 13:44:36.613690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.741 [2024-11-07 13:44:36.613709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.741 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.614007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.614023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.614333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.614350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.614672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.614687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.614980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.614995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.615333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.615348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.615659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.615674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.616032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.616046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.616342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.616357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.616689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.616704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.617046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.617063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.617395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.617410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.617608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.617622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.617854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.617875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.618123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.618138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.618452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.618466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.618807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.618825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.619047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.619063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.619358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.619373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.619683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.619698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.619990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.620006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.620334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.620348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.620693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.620708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.621033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.621048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.621341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.621355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.621741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.621756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.622083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.622098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.622475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.622491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.622833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.622849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.623200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.623217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.623537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.623552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.623921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.623937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.624291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.624307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.624485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.624499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.624833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.624848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.625170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.742 [2024-11-07 13:44:36.625186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.742 qpair failed and we were unable to recover it.
00:39:28.742 [2024-11-07 13:44:36.625525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.625541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.625826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.625840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.626056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.626072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.626401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.626416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.626751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.626766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.627113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.627128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.627472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.627487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.627808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.627823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.628156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.628171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.628501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.628515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.628938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.628954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.629274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.629289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.629587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.629602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.629934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.629949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.630284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.630299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.630634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.630649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.631002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.631017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.631196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.631210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.631577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.631592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.631793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.631808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.632165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.632184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.632515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.632532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.632866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.632881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.633101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.633116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.633447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.633462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.633665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.633679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.633968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.633984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.634160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.634175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.634734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.634851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.743 [2024-11-07 13:44:36.635349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.743 [2024-11-07 13:44:36.635403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420
00:39:28.743 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.635824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.635882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.636235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.636250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.636565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.636580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.636918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.636934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.637233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.637248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.637548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.637563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.637873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.637889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.638101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.638116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.638319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.638333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.638630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.638645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.638998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.639013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.639354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.639369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.639659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.639674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.640009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.640025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.640348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.640364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.640666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.640681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.640875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.640889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.641227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.641242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.641427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.641441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.641726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.641741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.642082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.642097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.642313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.642328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.642534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.642549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.642879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.642894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.643112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.643126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.643291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.643305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.643601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.643615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.643953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.643970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.644294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.644309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.644633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.644648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.644923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.644938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.645273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.645288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.645455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.645471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.645708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.645722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.646022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.744 [2024-11-07 13:44:36.646037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.744 qpair failed and we were unable to recover it.
00:39:28.744 [2024-11-07 13:44:36.646361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.646377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.646696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.646711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.647028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.647044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.647347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.647361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.647663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.647679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.647866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.647883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.648081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.648096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.648296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.648310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.648638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.648653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.648971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.648986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.649300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.649315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.649561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.649575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.649782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.649797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.650135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.650151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.650489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.650503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.650876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.650891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.651027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.651043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.651108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000417100 (9): Bad file descriptor
00:39:28.745 [2024-11-07 13:44:36.651838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.651961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.652451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.652505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.652869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.652885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.653186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.653202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.653494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.653509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.653844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.653860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.654160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.654174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.654510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.654524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.654873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.654888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.655178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.655194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.655489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.655505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.655827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.655841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.656176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.656192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.656529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.656545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.656742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.656757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.657154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.657173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.657486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.657501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.657684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.657700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.657912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.657929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.658230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.658245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.745 [2024-11-07 13:44:36.658528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.745 [2024-11-07 13:44:36.658543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.745 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.658867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.746 [2024-11-07 13:44:36.658883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.746 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.659184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.746 [2024-11-07 13:44:36.659198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.746 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.659486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.746 [2024-11-07 13:44:36.659501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.746 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.659836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.746 [2024-11-07 13:44:36.659850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.746 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.660190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.746 [2024-11-07 13:44:36.660206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.746 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.660508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.746 [2024-11-07 13:44:36.660522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.746 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.660841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.746 [2024-11-07 13:44:36.660856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.746 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.661210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.746 [2024-11-07 13:44:36.661225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.746 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.661550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.746 [2024-11-07 13:44:36.661565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.746 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.661750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.746 [2024-11-07 13:44:36.661766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.746 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.661890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.746 [2024-11-07 13:44:36.661907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.746 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.662214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.746 [2024-11-07 13:44:36.662228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:28.746 qpair failed and we were unable to recover it.
00:39:28.746 [2024-11-07 13:44:36.662420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.662435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.662645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.662659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.662885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.662900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.663203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.663218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.663563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.663578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.663878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.663893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.664093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.664108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.664410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.664424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.664730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.664746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.664974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.664989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 
00:39:28.746 [2024-11-07 13:44:36.665296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.665310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.665640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.665655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.665876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.665891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.666136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.666151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.666476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.666491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.666674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.666688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.666912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.666928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.667252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.667266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.667590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.667605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.667938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.667953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 
00:39:28.746 [2024-11-07 13:44:36.668130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.668144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.668330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.668346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.746 [2024-11-07 13:44:36.668672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.746 [2024-11-07 13:44:36.668687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.746 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.669061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.669076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.669396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.669412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.669695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.669710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.670031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.670046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.670367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.670382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.670713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.670727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.670941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.670956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 
00:39:28.747 [2024-11-07 13:44:36.671253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.671268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.671571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.671585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.671785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.671801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.672106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.672121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.672416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.672432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.672834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.672849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.673193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.673208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.673535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.673550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.673883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.673901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.674234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.674249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 
00:39:28.747 [2024-11-07 13:44:36.674577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.674592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.674901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.674916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.675258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.675274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.675452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.675467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.675744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.675759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.676129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.676144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.676470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.676484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.676802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.676816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.677130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.677146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.677440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.677455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 
00:39:28.747 [2024-11-07 13:44:36.677762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.677778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.678116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.678132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.678420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.678435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.678764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.678778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.679090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.679106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.679423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.679437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.679770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.679785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.680096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.680111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.680419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.747 [2024-11-07 13:44:36.680434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.747 qpair failed and we were unable to recover it. 00:39:28.747 [2024-11-07 13:44:36.680750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.680765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 
00:39:28.748 [2024-11-07 13:44:36.681085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.681101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.681430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.681446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.681776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.681791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.682105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.682120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.682405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.682420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.682742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.682758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.683073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.683089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.683291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.683307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.683632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.683648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.683983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.683998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 
00:39:28.748 [2024-11-07 13:44:36.684337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.684352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.684662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.684676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.684968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.684983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.685189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.685203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.685501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.685515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.685826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.685840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.686182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.686197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.686308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.686322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.686532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.686548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.686856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.686876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 
00:39:28.748 [2024-11-07 13:44:36.687195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.687211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.687540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.687555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.687793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.687807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.688034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.688049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.688376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.688390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.688706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.688721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.689037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.689052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.689372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.689387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.689726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.748 [2024-11-07 13:44:36.689740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.748 qpair failed and we were unable to recover it. 00:39:28.748 [2024-11-07 13:44:36.689941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.689955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 
00:39:28.749 [2024-11-07 13:44:36.690295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.690310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.690609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.690625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.690920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.690935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.691258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.691273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.691595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.691610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.691907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.691922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.692205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.692219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.692550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.692564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.692848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.692866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.693157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.693172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 
00:39:28.749 [2024-11-07 13:44:36.693504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.693519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.693684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.693701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.694029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.694044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.694355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.694370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.694585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.694600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.694786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.694801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.695110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.695125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.695450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.695466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.695784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.695799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.696167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.696186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 
00:39:28.749 [2024-11-07 13:44:36.696518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.696534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.696870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.696886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.697170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.697184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.697396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.697410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.697730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.697746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.698087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.698104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.698424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.698440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.698768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.698783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.699069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.699087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.699434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.699449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 
00:39:28.749 [2024-11-07 13:44:36.699746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.699762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.700080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.700096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.700418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.700434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.700719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.749 [2024-11-07 13:44:36.700734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.749 qpair failed and we were unable to recover it. 00:39:28.749 [2024-11-07 13:44:36.700934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.700949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.701275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.701291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.701609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.701623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.701800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.701814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.702140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.702156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.702483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.702498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 
00:39:28.750 [2024-11-07 13:44:36.702821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.702835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.703023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.703039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.703368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.703383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.703598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.703612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.703880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.703896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.704232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.704247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.704546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.704561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.704873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.704889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.705222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.705237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.705574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.705589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 
00:39:28.750 [2024-11-07 13:44:36.705903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.705918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.706239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.706254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.706584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.706598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.706906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.706921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.707218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.707233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.707541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.707556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.707883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.707898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.708206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.708221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.708529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.708544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.708754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.708768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 
00:39:28.750 [2024-11-07 13:44:36.708956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.708972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.709317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.709332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.709651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.709665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.709837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.709851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.710059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.710075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.710365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.710380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.710707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.710723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.750 [2024-11-07 13:44:36.710903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.750 [2024-11-07 13:44:36.710918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.750 qpair failed and we were unable to recover it. 00:39:28.751 [2024-11-07 13:44:36.711248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.751 [2024-11-07 13:44:36.711264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.751 qpair failed and we were unable to recover it. 00:39:28.751 [2024-11-07 13:44:36.711589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.751 [2024-11-07 13:44:36.711604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.751 qpair failed and we were unable to recover it. 
00:39:28.751 [2024-11-07 13:44:36.711887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.751 [2024-11-07 13:44:36.711903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.751 qpair failed and we were unable to recover it. 00:39:28.751 [2024-11-07 13:44:36.712217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.751 [2024-11-07 13:44:36.712232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.751 qpair failed and we were unable to recover it. 00:39:28.751 [2024-11-07 13:44:36.712535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.751 [2024-11-07 13:44:36.712551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.751 qpair failed and we were unable to recover it. 00:39:28.751 [2024-11-07 13:44:36.712888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.751 [2024-11-07 13:44:36.712903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.751 qpair failed and we were unable to recover it. 00:39:28.751 [2024-11-07 13:44:36.713221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.751 [2024-11-07 13:44:36.713235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.751 qpair failed and we were unable to recover it. 00:39:28.751 [2024-11-07 13:44:36.713585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.751 [2024-11-07 13:44:36.713599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.751 qpair failed and we were unable to recover it. 00:39:28.751 [2024-11-07 13:44:36.713922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.751 [2024-11-07 13:44:36.713937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:28.751 qpair failed and we were unable to recover it. 00:39:29.024 [2024-11-07 13:44:36.714142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-11-07 13:44:36.714158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-11-07 13:44:36.714433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-11-07 13:44:36.714448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-11-07 13:44:36.714786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-11-07 13:44:36.714800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 
00:39:29.024 [2024-11-07 13:44:36.715094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-11-07 13:44:36.715109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-11-07 13:44:36.715426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-11-07 13:44:36.715440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-11-07 13:44:36.715774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-11-07 13:44:36.715789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-11-07 13:44:36.716112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-11-07 13:44:36.716128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-11-07 13:44:36.716458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-11-07 13:44:36.716472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-11-07 13:44:36.716805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-11-07 13:44:36.716820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.024 qpair failed and we were unable to recover it. 00:39:29.024 [2024-11-07 13:44:36.717157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.024 [2024-11-07 13:44:36.717174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.717375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.717389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.717687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.717700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.718064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.718080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 
00:39:29.025 [2024-11-07 13:44:36.718292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.718307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.718642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.718657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.718932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.718947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.719241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.719255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.719593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.719607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.719909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.719926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.720322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.720338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.720620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.720634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.720954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.720969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.721188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.721202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 
00:39:29.025 [2024-11-07 13:44:36.721537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.721553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.721874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.721890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.722244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.722259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.722430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.722445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.722772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.722787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.723110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.723125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.723344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.723359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.723527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.723543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.723879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.723895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.724209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.724224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 
00:39:29.025 [2024-11-07 13:44:36.724557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.724572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.724903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.724918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.725234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.725248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.725445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.725459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.725652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.725667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.725975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.725990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.726164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.726179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.726455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.726470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.726753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.726769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.727089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.727104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 
00:39:29.025 [2024-11-07 13:44:36.727391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.727407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.727741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.025 [2024-11-07 13:44:36.727756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.025 qpair failed and we were unable to recover it. 00:39:29.025 [2024-11-07 13:44:36.727859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.727879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.728174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.728189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.728369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.728382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.728722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.728738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.728932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.728947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.729266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.729281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.729615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.729629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.729954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.729970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 
00:39:29.026 [2024-11-07 13:44:36.730244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.730259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.730439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.730454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.730770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.730785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.731091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.731107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.731474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.731489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.731666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.731684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.731974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.731990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.732276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.732290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.732469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.732485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.732813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.732828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 
00:39:29.026 [2024-11-07 13:44:36.733152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.733169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.733494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.733510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.733836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.733852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.734077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.734092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.734409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.734424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.734757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.734772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.735020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.735036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.735361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.735379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.735543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.735558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.735854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.735873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 
00:39:29.026 [2024-11-07 13:44:36.736143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.736158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.736495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.736511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.736792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.736808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.737129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.737144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.737472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.737488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.737804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.737820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.738147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.738164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.738486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.026 [2024-11-07 13:44:36.738501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.026 qpair failed and we were unable to recover it. 00:39:29.026 [2024-11-07 13:44:36.738831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.738848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.739180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.739196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 
00:39:29.027 [2024-11-07 13:44:36.739535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.739552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.739845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.739866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.740165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.740182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.740515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.740531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.740880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.740897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.741170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.741185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.741391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.741406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 
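The run above is the NVMe/TCP initiator repeatedly dialing the target at 10.0.0.2:4420 and failing inside posix_sock_create(); on Linux, errno 111 is ECONNREFUSED, meaning nothing was accepting on that address and port at the moment each connect() was issued. A minimal standalone sketch (not SPDK's implementation) that reproduces this errno against a port with no listener, using the address and port taken from the log:

/*
 * Sketch only, not SPDK's posix_sock_create(): attempt the same TCP connect
 * the initiator keeps retrying above. With no listener on 10.0.0.2:4420 this
 * prints "connect() failed, errno = 111 (Connection refused)" on Linux.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };

	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target from the log */

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	close(fd);
	return 0;
}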
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Write completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Write completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Write completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Write completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Write completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Write completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Write completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Write completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Write completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Read completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 Write completed with error (sct=0, sc=8)
00:39:29.027 starting I/O failed
00:39:29.027 [2024-11-07 13:44:36.742623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:39:29.027 [2024-11-07 13:44:36.743180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.027 [2024-11-07 13:44:36.743293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.027 qpair failed and we were unable to recover it.
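In the block above, each "completed with error (sct=0, sc=8)" line reports the status fields of one NVMe completion: sct is the Status Code Type (0 = the generic command status set) and sc the Status Code; per the NVMe base specification, generic status 08h is "Command Aborted due to SQ Deletion", which fits the adjacent record: a CQ transport error of -6 (ENXIO) tore down qpair 1 and all 32 outstanding reads and writes were failed back. A short illustrative sketch (field layout per the NVMe base spec, not SPDK's structs) of how sct and sc unpack from the 16-bit completion status word:

/*
 * Sketch: unpack SCT/SC from the 16-bit NVMe completion status word
 * (NVMe base spec layout: bit 0 = phase tag, bits 8:1 = Status Code,
 * bits 11:9 = Status Code Type). Illustrative only, not SPDK code.
 */
#include <stdint.h>
#include <stdio.h>

static void decode_status(uint16_t status)
{
	uint8_t sc  = (status >> 1) & 0xff; /* Status Code */
	uint8_t sct = (status >> 9) & 0x7;  /* Status Code Type */

	printf("sct=%u, sc=%u%s\n", sct, sc,
	       (sct == 0 && sc == 0x8) ? " (generic: command aborted, SQ deleted)" : "");
}

int main(void)
{
	decode_status(0x8 << 1); /* encodes the sct=0, sc=8 seen above */
	return 0;
}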
00:39:29.027 [2024-11-07 13:44:36.743600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.743664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.744130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.744243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.744681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.744733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.745204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.745316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.745777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.745829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.746099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.746146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.746546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.746587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.746952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.746996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.747351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.747393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.747755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.747797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 
00:39:29.027 [2024-11-07 13:44:36.748059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.748102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.748460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.748500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.748879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.748921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.749312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.749355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.749739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.027 [2024-11-07 13:44:36.749781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.027 qpair failed and we were unable to recover it. 00:39:29.027 [2024-11-07 13:44:36.750168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.750211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.750586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.750627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.751012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.751055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.751423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.751466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.751836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.751891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 
00:39:29.028 [2024-11-07 13:44:36.752129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.752171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.752591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.752634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.752939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.752981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.753237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.753284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.753640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.753681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.754079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.754122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.754506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.754547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.754902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.754947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.755303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.755343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.755737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.755778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 
00:39:29.028 [2024-11-07 13:44:36.756150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.028 [2024-11-07 13:44:36.756193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.028 qpair failed and we were unable to recover it.
00:39:29.028 [2024-11-07 13:44:36.756545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.028 [2024-11-07 13:44:36.756585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.028 qpair failed and we were unable to recover it.
00:39:29.028 [2024-11-07 13:44:36.756972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.028 [2024-11-07 13:44:36.757016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.028 qpair failed and we were unable to recover it.
00:39:29.028 [2024-11-07 13:44:36.757378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.028 [2024-11-07 13:44:36.757421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.028 qpair failed and we were unable to recover it.
00:39:29.028 [2024-11-07 13:44:36.757713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.028 [2024-11-07 13:44:36.757757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.028 qpair failed and we were unable to recover it.
00:39:29.028 [2024-11-07 13:44:36.758128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:39:29.028 [2024-11-07 13:44:36.758152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.028 [2024-11-07 13:44:36.758194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.028 qpair failed and we were unable to recover it.
00:39:29.028 [2024-11-07 13:44:36.758566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.028 [2024-11-07 13:44:36.758608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.028 qpair failed and we were unable to recover it.
00:39:29.028 [2024-11-07 13:44:36.758910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.028 [2024-11-07 13:44:36.758953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.028 qpair failed and we were unable to recover it.
00:39:29.028 [2024-11-07 13:44:36.759309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.028 [2024-11-07 13:44:36.759350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.028 qpair failed and we were unable to recover it.
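The spdk_app_start notice interleaved above ("Total cores available: 4") shows the SPDK target application only now starting up in the same console, which is consistent with the refusals: the initiator began probing 10.0.0.2:4420 before any listener existed and keeps retrying. A hedged, generic sketch of a harness-side readiness probe (wait_for_listener is a hypothetical helper, not part of these test scripts) that polls until connect() succeeds:

/*
 * Hypothetical readiness probe, not part of the SPDK test suite: retry
 * connect() while it fails with ECONNREFUSED, until a listener appears
 * or the attempt budget runs out.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

static bool wait_for_listener(const char *ip, uint16_t port, int attempts)
{
	struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };

	inet_pton(AF_INET, ip, &addr.sin_addr);
	for (int i = 0; i < attempts; i++) {
		int fd = socket(AF_INET, SOCK_STREAM, 0);
		if (fd < 0)
			return false;
		if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
			close(fd);
			return true;            /* someone is accepting on ip:port */
		}
		int saved = errno;              /* close() may clobber errno */
		close(fd);
		if (saved != ECONNREFUSED)
			return false;           /* unexpected failure; stop probing */
		usleep(100 * 1000);             /* no listener yet; retry in 100 ms */
	}
	return false;
}

Invoked as, say, wait_for_listener("10.0.0.2", 4420, 100) before creating the first qpair, a probe like this would absorb the startup race instead of surfacing it as hundreds of refused connects.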
00:39:29.028 [2024-11-07 13:44:36.759715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.759757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.760142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.760186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.760565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.760606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.760841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.760897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.761296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.761336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.761694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.761734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.762112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.762155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.762522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.762565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.762941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.762983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 00:39:29.028 [2024-11-07 13:44:36.763343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.028 [2024-11-07 13:44:36.763384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.028 qpair failed and we were unable to recover it. 
00:39:29.028 [2024-11-07 13:44:36.763662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.028 [2024-11-07 13:44:36.763703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.028 qpair failed and we were unable to recover it.
00:39:29.028 [... the same three-line error sequence repeats for every reconnect attempt from 13:44:36.763 through 13:44:36.846; only the timestamps differ ...]
00:39:29.035 [2024-11-07 13:44:36.847108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.847149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.847519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.847559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.847897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.847940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.848280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.848321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.848700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.848741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.849013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.849054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.849435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.849477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.849875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.849916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.850284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.850325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.850688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.850729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 
00:39:29.035 [2024-11-07 13:44:36.851023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.851066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.851431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.851471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.851829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.851883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.852259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.852300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.852716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.852757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.853140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.853183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.853415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.853458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.853860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.853909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.854289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.854329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.854588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.854630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 
00:39:29.035 [2024-11-07 13:44:36.855017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.855059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.855436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.855482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.855824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.855889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.856299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.856340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.856751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.856790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.857175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.857216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.857627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.857669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.858034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.858077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.858421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.858461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 00:39:29.035 [2024-11-07 13:44:36.858796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.035 [2024-11-07 13:44:36.858837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.035 qpair failed and we were unable to recover it. 
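Context for the repeated error pair above: errno = 111 is Linux's ECONNREFUSED, meaning the peer at 10.0.0.2, port 4420 (the NVMe/TCP default port) actively refused the TCP connection, typically because no listener was up yet. The following is a minimal standalone sketch in plain POSIX C (not SPDK's posix_sock_create, and not the nvme_tcp recovery logic); the loopback address, port reuse, and attempt count are illustrative assumptions only. It reproduces the same errno and the retry pattern visible in this log:

/*
 * Standalone sketch, not SPDK code: connect() to a TCP port with no
 * listener fails with errno = 111 (ECONNREFUSED) on Linux, which is
 * what posix_sock_create reports above. The bounded retry loop mirrors
 * the repeated attempts in the log; address, port, and attempt count
 * are illustrative.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP default port, as in the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* assumes no listener on this port */

    for (int attempt = 0; attempt < 3; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt + 1);
            close(fd);
            return 0;
        }
        /* With no listener this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        usleep(100 * 1000);                          /* brief pause before retrying */
    }
    /* All attempts exhausted -- analogous to the log's
     * "qpair failed and we were unable to recover it." */
    return 1;
}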
00:39:29.036 [... connect() failure sequence continues ...]
00:39:29.036 [2024-11-07 13:44:36.860407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:39:29.036 [2024-11-07 13:44:36.860446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:39:29.036 [2024-11-07 13:44:36.860461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:39:29.036 [2024-11-07 13:44:36.860473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:39:29.036 [2024-11-07 13:44:36.860483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:39:29.036 [... connect() failure sequence continues ...]
00:39:29.036 [... connect() failure sequence continues ...]
00:39:29.036 [2024-11-07 13:44:36.862938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:39:29.036 [2024-11-07 13:44:36.863063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:39:29.036 [2024-11-07 13:44:36.863300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:39:29.036 [2024-11-07 13:44:36.863319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:39:29.036 [... connect() failure sequence continues ...]
00:39:29.037 [... connect() failure sequence continues: every attempt to 10.0.0.2, port 4420 fails with errno = 111 and each qpair is reported as failed and unrecoverable ...]
00:39:29.040 [2024-11-07 13:44:36.910586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.910626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.910978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.911019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.911399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.911438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.911875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.911918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.912238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.912278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.912666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.912706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.913070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.913111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.913451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.913492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.913749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.913789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.914051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.914094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 
00:39:29.040 [2024-11-07 13:44:36.914433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.914474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.914605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.914644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.914901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.914944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.915382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.915423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.915768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.915809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.916068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.916109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.916486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.916527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.916921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.916964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.917198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.917238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.917623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.917665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 
00:39:29.040 [2024-11-07 13:44:36.917943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.918010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.918416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.918457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.918824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.918874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.919123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.919164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.919383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.919430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.919679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.919719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.920158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.920201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.920580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.920621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.920843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.920892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 00:39:29.040 [2024-11-07 13:44:36.921250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.040 [2024-11-07 13:44:36.921291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.040 qpair failed and we were unable to recover it. 
00:39:29.040 [2024-11-07 13:44:36.921661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.921702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.922107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.922151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.922502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.922543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.922921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.922963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.923185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.923227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.923639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.923681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.924061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.924105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.924472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.924513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.924776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.924816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.925216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.925259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 
00:39:29.041 [2024-11-07 13:44:36.925510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.925551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.925811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.925852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.926101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.926141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.926524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.926566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.926959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.927001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.927255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.927294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.927676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.927718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.927971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.928012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.928384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.928424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.928682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.928722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 
00:39:29.041 [2024-11-07 13:44:36.929099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.929141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.929414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.929456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.929827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.929876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.930231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.930272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.930645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.930685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.930950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.930992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.931285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.931326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.931583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.931623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.932003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.932045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 00:39:29.041 [2024-11-07 13:44:36.932317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.041 [2024-11-07 13:44:36.932359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420 00:39:29.041 qpair failed and we were unable to recover it. 
00:39:29.041 [2024-11-07 13:44:36.934397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.041 [2024-11-07 13:44:36.934438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.041 qpair failed and we were unable to recover it.
00:39:29.041 [2024-11-07 13:44:36.934837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.041 [2024-11-07 13:44:36.934899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:29.041 qpair failed and we were unable to recover it.
00:39:29.045 [2024-11-07 13:44:36.966204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.045 [2024-11-07 13:44:36.966219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:29.045 qpair failed and we were unable to recover it.
00:39:29.045 [2024-11-07 13:44:36.966536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.966552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.966831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.966848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.967150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.967166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.967359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.967375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.967555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.967570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.967893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.967909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.968197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.968212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.968545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.968560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.968841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.968855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.969148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.969163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 
00:39:29.045 [2024-11-07 13:44:36.969482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.969498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.969675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.969691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.970018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.970034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.970372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.970388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.970720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.970734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.970953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.970968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.971302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.971317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.971584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.971598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.971814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.971829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.972138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.972154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 
00:39:29.045 [2024-11-07 13:44:36.972490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.972505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.972673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.972690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.972858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.972878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.973212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.973228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.973404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.973419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.973762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.973777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.974123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.974141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.974492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.974508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.974679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.974694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.974986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.975002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 
00:39:29.045 [2024-11-07 13:44:36.975338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.975355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.975522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.975537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.975736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.975751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.045 [2024-11-07 13:44:36.976024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.045 [2024-11-07 13:44:36.976040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.045 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.976368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.976384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.976674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.976690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.977028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.977044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.977102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.977115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.977428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.977443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.977787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.977802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 
00:39:29.046 [2024-11-07 13:44:36.978104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.978121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.978445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.978464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.978648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.978662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.978971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.978987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.979316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.979332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.979523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.979538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.979829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.979844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.980190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.980207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.980546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.980561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.980899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.980914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 
00:39:29.046 [2024-11-07 13:44:36.981233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.981248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.981577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.981592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.981920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.981936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.982266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.982281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.982572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.982588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.982880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.982896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.983215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.983230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.983520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.983535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.983755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.983770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.984049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.984064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 
00:39:29.046 [2024-11-07 13:44:36.984392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.984407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.984733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.984748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.984981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.984998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.985280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.985296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.985485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.985502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.985682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.985699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.986017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.986032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.986289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.986304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.986638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.986653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 00:39:29.046 [2024-11-07 13:44:36.986992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.046 [2024-11-07 13:44:36.987009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.046 qpair failed and we were unable to recover it. 
00:39:29.046 [2024-11-07 13:44:36.987335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.987349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.987539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.987553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.987854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.987873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.988169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.988184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.988514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.988529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.988871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.988887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.989065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.989080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.989402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.989417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.989612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.989627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.989797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.989812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 
00:39:29.047 [2024-11-07 13:44:36.990137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.990153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.990438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.990455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.990655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.990669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.990850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.990875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.991245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.991260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.991607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.991623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.991959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.991975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.992313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.992328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.992652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.992667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.992976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.992991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 
00:39:29.047 [2024-11-07 13:44:36.993315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.993330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.993587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.993602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.993929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.993945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.994286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.994300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.994483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.994497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.994701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.994716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.995026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.995041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.995370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.995386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.995663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.995678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.995982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.995996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 
00:39:29.047 [2024-11-07 13:44:36.996329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.996343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.996547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.996563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.996871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.996887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.997068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.997082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.997408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.997423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.997777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.997793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.997979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.047 [2024-11-07 13:44:36.997995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.047 qpair failed and we were unable to recover it. 00:39:29.047 [2024-11-07 13:44:36.998185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:36.998201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:36.998434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:36.998450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:36.998630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:36.998644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 
00:39:29.048 [2024-11-07 13:44:36.998952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:36.998968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:36.999264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:36.999279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:36.999612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:36.999627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:36.999816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:36.999832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.000161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.000177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.000508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.000524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.000841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.000857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.001225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.001240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.001562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.001577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.001875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.001891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 
00:39:29.048 [2024-11-07 13:44:37.002256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.002270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.002586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.002604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.002918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.002934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.003246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.003263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.003591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.003607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.003811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.003824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.004148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.004165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.004356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.004372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.004437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.004452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.004632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.004648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 
00:39:29.048 [2024-11-07 13:44:37.005016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.005032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.005367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.005382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.005717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.005732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.005928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.005945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.006234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.006249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.006578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.006594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.006765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.006781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.006967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.006983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.007327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.007343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 00:39:29.048 [2024-11-07 13:44:37.007650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.048 [2024-11-07 13:44:37.007664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.048 qpair failed and we were unable to recover it. 
00:39:29.329 [2024-11-07 13:44:37.065078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.329 [2024-11-07 13:44:37.065094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.329 qpair failed and we were unable to recover it. 00:39:29.329 [2024-11-07 13:44:37.065428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.329 [2024-11-07 13:44:37.065444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.329 qpair failed and we were unable to recover it. 00:39:29.329 [2024-11-07 13:44:37.065774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.329 [2024-11-07 13:44:37.065789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.329 qpair failed and we were unable to recover it. 00:39:29.329 [2024-11-07 13:44:37.065973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.329 [2024-11-07 13:44:37.065988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.329 qpair failed and we were unable to recover it. 00:39:29.329 [2024-11-07 13:44:37.066268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.329 [2024-11-07 13:44:37.066283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.329 qpair failed and we were unable to recover it. 00:39:29.329 [2024-11-07 13:44:37.066465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.329 [2024-11-07 13:44:37.066479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.329 qpair failed and we were unable to recover it. 00:39:29.329 [2024-11-07 13:44:37.066657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.329 [2024-11-07 13:44:37.066671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.329 qpair failed and we were unable to recover it. 00:39:29.329 [2024-11-07 13:44:37.066964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.329 [2024-11-07 13:44:37.066979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.329 qpair failed and we were unable to recover it. 00:39:29.329 [2024-11-07 13:44:37.067318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.329 [2024-11-07 13:44:37.067333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.329 qpair failed and we were unable to recover it. 00:39:29.329 [2024-11-07 13:44:37.067672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.329 [2024-11-07 13:44:37.067689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.329 qpair failed and we were unable to recover it. 
00:39:29.329 [2024-11-07 13:44:37.068003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.329 [2024-11-07 13:44:37.068018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.329 qpair failed and we were unable to recover it. 00:39:29.329 [2024-11-07 13:44:37.068322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.329 [2024-11-07 13:44:37.068337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.068644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.068660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.068981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.068997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.069342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.069357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.069657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.069673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.069964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.069979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.070170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.070186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.070510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.070525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.070582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.070594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 
00:39:29.330 [2024-11-07 13:44:37.070879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.070895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.071088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.071104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.071434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.071449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.071767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.071783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.072124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.072139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.072444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.072460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.072814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.072828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.072996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.073010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.073353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.073367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.073668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.073682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 
00:39:29.330 [2024-11-07 13:44:37.073876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.073893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.074236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.074253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.074575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.074591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.074770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.074785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.075006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.075021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.075337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.075351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.075655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.075670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.076005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.076022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.076182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.076197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.076417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.076432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 
00:39:29.330 [2024-11-07 13:44:37.076604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.076619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.076947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.076962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.077301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.077317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.077542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.077556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.077839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.077853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.078163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.078178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.078517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.078533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.078705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.078721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.079053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.079069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 00:39:29.330 [2024-11-07 13:44:37.079390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.330 [2024-11-07 13:44:37.079406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.330 qpair failed and we were unable to recover it. 
00:39:29.331 [2024-11-07 13:44:37.079603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.079617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.079920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.079935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.080234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.080249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.080529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.080544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.080712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.080726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.081038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.081053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.081368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.081383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.081720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.081735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.082024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.082039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.082241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.082257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 
00:39:29.331 [2024-11-07 13:44:37.082536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.082552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.082878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.082894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.083193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.083207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.083369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.083384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.083700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.083714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.084034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.084050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.084342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.084356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.084709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.084723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.084949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.084964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.085302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.085317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 
00:39:29.331 [2024-11-07 13:44:37.085645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.085660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.085984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.086001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.086202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.086217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.086276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.086290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.086596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.086611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.086921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.086936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.087230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.087244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.087553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.087569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.087869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.087885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.088183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.088198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 
00:39:29.331 [2024-11-07 13:44:37.088558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.088572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.088894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.088909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.089210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.089224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.089533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.089549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.089769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.089785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.090029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.090044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.090242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.090256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.090445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.331 [2024-11-07 13:44:37.090461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.331 qpair failed and we were unable to recover it. 00:39:29.331 [2024-11-07 13:44:37.090781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.090796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.091087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.091103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 
00:39:29.332 [2024-11-07 13:44:37.091438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.091453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.091654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.091670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.091841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.091857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.092163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.092180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.092359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.092375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.092716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.092733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.092910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.092927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.093096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.093111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.093426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.093441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.093590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.093606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 
00:39:29.332 [2024-11-07 13:44:37.093915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.093931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.094210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.094226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.094296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.094310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.094640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.094655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.094939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.094958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.095279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.095297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.095622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.095638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.095980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.095996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.096334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.096348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.096540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.096554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 
00:39:29.332 [2024-11-07 13:44:37.096768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.096784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.097075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.097090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.097370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.097385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.097692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.097707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.098085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.098100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.098273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.098288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.098505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.098522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.098835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.098851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.098915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.098931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.099230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.099246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 
00:39:29.332 [2024-11-07 13:44:37.099549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.099565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.099891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.099907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.100226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.100242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.100538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.100552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.100887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.100903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.101232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.101247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.332 qpair failed and we were unable to recover it. 00:39:29.332 [2024-11-07 13:44:37.101452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.332 [2024-11-07 13:44:37.101466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.101767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.101782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.102107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.102124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.102423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.102438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 
00:39:29.333 [2024-11-07 13:44:37.102764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.102780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.103108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.103123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.103454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.103470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.103831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.103846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.104070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.104085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.104283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.104298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.104579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.104595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.104906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.104922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.105099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.105116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.105451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.105466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 
00:39:29.333 [2024-11-07 13:44:37.105798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.105813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.106152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.106168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.106511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.106528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.106843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.106857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.107176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.107191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.107409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.107424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.107628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.107642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.107842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.107856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.108175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.108190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 00:39:29.333 [2024-11-07 13:44:37.108524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.333 [2024-11-07 13:44:37.108538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.333 qpair failed and we were unable to recover it. 
[... the identical connect() failure (errno = 111) against 10.0.0.2:4420 repeats on tqpair=0x615000417b00 from 13:44:37.108719 through 13:44:37.113561, each attempt ending "qpair failed and we were unable to recover it." ...]
00:39:29.334 [2024-11-07 13:44:37.114029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.334 [2024-11-07 13:44:37.114148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.334 qpair failed and we were unable to recover it.
00:39:29.334 [2024-11-07 13:44:37.114650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.334 [2024-11-07 13:44:37.114703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500041ff80 with addr=10.0.0.2, port=4420
00:39:29.334 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) against 10.0.0.2:4420 then resumes on tqpair=0x615000417b00 at 13:44:37.115058 and repeats unchanged through 13:44:37.167811, every attempt ending "qpair failed and we were unable to recover it." ...]
00:39:29.339 [2024-11-07 13:44:37.168183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.168198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.168403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.168418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.168747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.168762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.169096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.169113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.169445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.169461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.169880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.169897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.170221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.170235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.170556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.170572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.170901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.170917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.171248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.171264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 
00:39:29.339 [2024-11-07 13:44:37.171555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.171570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.171928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.171944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.172292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.172306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.172649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.172664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.173000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.173015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.173389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.173405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.173601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.173616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.173924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.173939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.174123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.174137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.174479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.174494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 
00:39:29.339 [2024-11-07 13:44:37.174848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.174867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.175162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.175176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.175459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.175473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.175658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.175673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.175989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.176005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.176335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.176351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.176694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.176710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.177023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.177039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.177207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.177224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.177424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.177438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 
00:39:29.339 [2024-11-07 13:44:37.177783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.177797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.178126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.178143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.178494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.178509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.178831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.178846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.179049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.179064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.179245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.339 [2024-11-07 13:44:37.179259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.339 qpair failed and we were unable to recover it. 00:39:29.339 [2024-11-07 13:44:37.179597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.179612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.179828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.179843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.180164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.180179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.180515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.180530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 
00:39:29.340 [2024-11-07 13:44:37.180867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.180882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.181166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.181180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.181361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.181376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.181701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.181716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.182040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.182056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.182118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.182133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.182426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.182441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.182609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.182623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.182959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.182975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.183148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.183163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 
00:39:29.340 [2024-11-07 13:44:37.183538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.183553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.183747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.183761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.184098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.184113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.184450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.184465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.184648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.184663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.184970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.184986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.185357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.185372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.185712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.185727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.185902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.185916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.186198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.186213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 
00:39:29.340 [2024-11-07 13:44:37.186439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.186454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.186624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.186638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.186978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.186994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.187175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.187189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.187373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.187387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.187716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.187731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.188071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.188087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.188298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.188312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.188716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.188838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.189324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.189376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000440000 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 
00:39:29.340 [2024-11-07 13:44:37.189594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.189610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.189916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.189931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.190130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.190144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.190321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.190337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.340 qpair failed and we were unable to recover it. 00:39:29.340 [2024-11-07 13:44:37.190512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.340 [2024-11-07 13:44:37.190527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.190575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.190588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.190886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.190900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.191119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.191134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.191462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.191476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.191605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.191619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 
00:39:29.341 [2024-11-07 13:44:37.191794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.191808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.192043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.192059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.192363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.192378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.192722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.192737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.193125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.193139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.193479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.193494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.193683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.193698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.193878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.193893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.194134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.194148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.194431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.194446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 
00:39:29.341 [2024-11-07 13:44:37.194646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.194662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.194849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.194876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.195176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.195191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.195398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.195415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.195520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.195535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.195819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.195834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.196169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.196185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.196375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.196390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.196710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.196725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.197071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.197087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 
00:39:29.341 [2024-11-07 13:44:37.197319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.197333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.197533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.197549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.197765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.197779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.198003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.198017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.198206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.198221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.198562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.198576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.198668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.198681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.198891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.198906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.199191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.199209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.199489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.199505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 
00:39:29.341 [2024-11-07 13:44:37.199832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.199846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.200182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.200198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.341 [2024-11-07 13:44:37.200256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.341 [2024-11-07 13:44:37.200270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.341 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.200604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.200618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.200934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.200949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.201159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.201174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.201488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.201504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.201585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.201600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.201871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.201887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.202193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.202209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 
00:39:29.342 [2024-11-07 13:44:37.202542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.202556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.202884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.202900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.203233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.203247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.203435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.203449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.203754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.203769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.203957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.203972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.204242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.204257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.204593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.204608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.204939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.204954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.205139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.205153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 
00:39:29.342 [2024-11-07 13:44:37.205491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.205505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.205782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.205797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.205985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.206002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.206329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.206347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.206672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.206686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.206813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.206827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.207154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.207169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.207501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.207516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.207866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.207882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 00:39:29.342 [2024-11-07 13:44:37.208074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.342 [2024-11-07 13:44:37.208089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.342 qpair failed and we were unable to recover it. 
00:39:29.348 [2024-11-07 13:44:37.266278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.266293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.266581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.266596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.266897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.266912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.267242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.267258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.267430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.267444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.267780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.267795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.268341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.268357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.268549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.268564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.268869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.268885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.269227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.269242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 
00:39:29.348 [2024-11-07 13:44:37.269584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.269599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.269906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.269922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.270220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.270235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.270563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.270578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.270756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.270771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.270950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.270965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.271020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.271033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.271345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.271361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.271685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.271700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.272047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.272064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 
00:39:29.348 [2024-11-07 13:44:37.272362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.272377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.272727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.272743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.273084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.273100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.273307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.273323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.273417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.273432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.273717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.273732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.273926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.273941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.274254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.274269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.274478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.274494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.274662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.274675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 
00:39:29.348 [2024-11-07 13:44:37.275047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.275063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.275408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.275424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.275793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.348 [2024-11-07 13:44:37.275809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.348 qpair failed and we were unable to recover it. 00:39:29.348 [2024-11-07 13:44:37.276134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.276149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.276342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.276358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.276703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.276720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.276918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.276933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.277265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.277280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.277620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.277634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.277970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.277986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 
00:39:29.349 [2024-11-07 13:44:37.278279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.278293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.278623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.278637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.279003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.279019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.279084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.279098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.279437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.279452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.279714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.279728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.280085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.280100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.280393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.280407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.280703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.280717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.280915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.280931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 
00:39:29.349 [2024-11-07 13:44:37.281252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.281268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.281438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.281453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.281796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.281812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.282191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.282208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.282550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.282566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.282873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.282890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.283224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.283239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.283582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.283598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.283894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.283910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.284198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.284213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 
00:39:29.349 [2024-11-07 13:44:37.284545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.284560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.284850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.284868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.285120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.285135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.285467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.285482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.285819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.285834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.286176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.286193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.286537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.286552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.286889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.286904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.287235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.287251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.287463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.287477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 
00:39:29.349 [2024-11-07 13:44:37.287706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.287724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.349 [2024-11-07 13:44:37.287888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.349 [2024-11-07 13:44:37.287903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.349 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.288210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.288225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.288520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.288536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.288702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.288718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.288908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.288927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.289233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.289248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.289596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.289612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.290022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.290038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.290324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.290338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 
00:39:29.350 [2024-11-07 13:44:37.290661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.290676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.290872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.290887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.291087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.291102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.291430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.291445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.291765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.291781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.292094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.292109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.292326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.292341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.292532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.292549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.292888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.292905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.293243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.293258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 
00:39:29.350 [2024-11-07 13:44:37.293601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.293618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.293816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.293831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.294140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.294156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.294487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.294503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.294708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.294723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.295033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.295049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.295381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.295396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.295692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.295706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.296031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.296048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.296345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.296359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 
00:39:29.350 [2024-11-07 13:44:37.296653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.296667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.296947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.296963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.297254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.297268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.297573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.297588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.297771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.297786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.298109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.298124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.298468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.298484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.298805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.298821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.299152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.299168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 00:39:29.350 [2024-11-07 13:44:37.299520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.350 [2024-11-07 13:44:37.299536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.350 qpair failed and we were unable to recover it. 
00:39:29.350 [2024-11-07 13:44:37.299756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.299772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.300055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.300071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.300368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.300383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.300728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.300743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.300980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.300995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.301190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.301208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.301542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.301558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.301895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.301911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.302202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.302217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.302410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.302425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 
00:39:29.351 [2024-11-07 13:44:37.302765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.302780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.303100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.303114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.303399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.303414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.303791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.303805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.304118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.304132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.304420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.304436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.304761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.304776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.304956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.304971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.305179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.305193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 00:39:29.351 [2024-11-07 13:44:37.305540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.351 [2024-11-07 13:44:37.305555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.351 qpair failed and we were unable to recover it. 
00:39:29.351 [2024-11-07 13:44:37.305739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.351 [2024-11-07 13:44:37.305753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:29.351 qpair failed and we were unable to recover it.
00:39:29.625 [... the same three-line failure (posix.c:1054 connect() errno = 111, nvme_tcp.c:2288 sock connection error for tqpair=0x615000417b00 at 10.0.0.2 port 4420, then "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt from 13:44:37.305739 through 13:44:37.339011, only the timestamps advancing ...]
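errno = 111 on Linux is ECONNREFUSED: the peer host answers each TCP SYN with a RST because nothing is listening on 10.0.0.2:4420, which is the expected state while the target side of this disconnect test is down. A minimal standalone sketch (not SPDK's posix.c; the address and port are copied from the log purely for illustration) that reproduces the same errno against a reachable host with no listener:

```c
/* Standalone demo, not SPDK code: one connect() to the address/port
 * from the log. Against a reachable host with no listener on the
 * port, the kernel reports ECONNREFUSED (111 on Linux). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Address and port taken from the log above, for illustration only. */
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on a reachable host this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```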
00:39:29.625 [... connection-refused pattern continues, 13:44:37.339194 through 13:44:37.341129 ...]
00:39:29.625 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:39:29.625 [... one more refused attempt at 13:44:37.341341 ...]
00:39:29.626 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:39:29.626 [... refused attempts at 13:44:37.341694 and 13:44:37.341911 ...]
00:39:29.626 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:39:29.626 [... refused attempt at 13:44:37.342215 ...]
00:39:29.626 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:39:29.626 [... refused attempt at 13:44:37.342301 ...]
00:39:29.626 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:29.626 [... connection-refused pattern continues, 13:44:37.342625 through 13:44:37.344323 ...]
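The interleaved xtrace lines show the harness side of nvmf_target_disconnect_tc2 progressing (the wait loop's "(( i == 0 ))" check, "return 0", "timing_exit start_nvmf_tgt", and "set +x") while the host driver keeps cycling through connect attempts against the same tqpair. A schematic sketch of that retry shape follows; it assumes a simple bounded loop with an invented attempt budget and pause, and is illustrative only, since the real reconnect policy lives in nvme_tcp.c and differs in detail:

```c
/* Schematic sketch only, NOT SPDK's nvme_tcp implementation: keep
 * re-attempting the TCP connection, logging each refusal, until the
 * target starts listening or the attempt budget runs out. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* One TCP connect attempt; returns true on success. */
static bool try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;

    struct sockaddr_in a = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &a.sin_addr);

    if (connect(fd, (struct sockaddr *)&a, sizeof(a)) < 0) {
        fprintf(stderr, "connect() failed, errno = %d\n", errno);
        close(fd);
        return false;
    }
    close(fd);
    return true;
}

int main(void)
{
    /* The attempt budget and pause are invented numbers for the sketch. */
    for (int attempt = 0; attempt < 100; attempt++) {
        if (try_connect("10.0.0.2", 4420))
            return 0;                     /* the qpair would be usable now */
        fprintf(stderr, "qpair failed and we were unable to recover it.\n");
        usleep(1000);                     /* short pause between attempts */
    }
    return 1;                             /* gave up, like the log above */
}
```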
00:39:29.626 [2024-11-07 13:44:37.344512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.626 [2024-11-07 13:44:37.344528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420
00:39:29.626 qpair failed and we were unable to recover it.
00:39:29.628 [... the identical three-line failure repeats for every subsequent attempt through 13:44:37.366578; no attempt succeeds, and each one ends with "qpair failed and we were unable to recover it." ...]
00:39:29.628 [2024-11-07 13:44:37.366934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.628 [2024-11-07 13:44:37.366949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.628 qpair failed and we were unable to recover it. 00:39:29.628 [2024-11-07 13:44:37.367140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.628 [2024-11-07 13:44:37.367155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.628 qpair failed and we were unable to recover it. 00:39:29.628 [2024-11-07 13:44:37.367338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.628 [2024-11-07 13:44:37.367352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.628 qpair failed and we were unable to recover it. 00:39:29.628 [2024-11-07 13:44:37.367543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.628 [2024-11-07 13:44:37.367559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.628 qpair failed and we were unable to recover it. 00:39:29.628 [2024-11-07 13:44:37.367742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.628 [2024-11-07 13:44:37.367757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.628 qpair failed and we were unable to recover it. 00:39:29.628 [2024-11-07 13:44:37.367970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.628 [2024-11-07 13:44:37.367986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.368169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.368183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.368365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.368380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.368718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.368734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.369059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.369075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 
00:39:29.629 [2024-11-07 13:44:37.369246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.369260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.369586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.369602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.369939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.369954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.370296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.370312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.370634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.370650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.370978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.370998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.371347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.371363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.371672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.371687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.371853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.371872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.372233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.372248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 
00:39:29.629 [2024-11-07 13:44:37.372585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.372601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.372894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.372910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.373192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.373206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.373356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.373370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.373710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.373725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.374030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.374045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.374106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.374120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.374407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.374422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.374759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.374776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.375099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.375115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 
00:39:29.629 [2024-11-07 13:44:37.375436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.375452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.375750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.375765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.375946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.375962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.376257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.376271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.376609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.376624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.376948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.376963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.377288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.377303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.377653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.377669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.378009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.378025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.378372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.378386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 
00:39:29.629 [2024-11-07 13:44:37.378727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.378741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.379074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.379089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.379432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.629 [2024-11-07 13:44:37.379447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.629 qpair failed and we were unable to recover it. 00:39:29.629 [2024-11-07 13:44:37.379646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.630 [2024-11-07 13:44:37.379661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.630 qpair failed and we were unable to recover it. 00:39:29.630 [2024-11-07 13:44:37.379893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.630 [2024-11-07 13:44:37.379908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.630 qpair failed and we were unable to recover it. 00:39:29.630 [2024-11-07 13:44:37.380212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.630 [2024-11-07 13:44:37.380226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.630 qpair failed and we were unable to recover it. 00:39:29.630 [2024-11-07 13:44:37.380545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.630 [2024-11-07 13:44:37.380559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.630 qpair failed and we were unable to recover it. 00:39:29.630 [2024-11-07 13:44:37.380614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.630 [2024-11-07 13:44:37.380627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.630 qpair failed and we were unable to recover it. 00:39:29.630 [2024-11-07 13:44:37.380813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.630 [2024-11-07 13:44:37.380827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.630 qpair failed and we were unable to recover it. 00:39:29.630 [2024-11-07 13:44:37.381039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.630 [2024-11-07 13:44:37.381054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.630 qpair failed and we were unable to recover it. 
00:39:29.630 [2024-11-07 13:44:37.381372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.630 [2024-11-07 13:44:37.381387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.630 qpair failed and we were unable to recover it.
00:39:29.630 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:39:29.630 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:39:29.630 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:29.630 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... "connect() failed, errno = 111" / "qpair failed and we were unable to recover it" for tqpair=0x615000417b00 (10.0.0.2:4420) continues repeating ...]
00:39:29.633 [2024-11-07 13:44:37.415350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.633 [2024-11-07 13:44:37.415366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.633 qpair failed and we were unable to recover it.
00:39:29.633 [2024-11-07 13:44:37.415545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.633 [2024-11-07 13:44:37.415561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.633 qpair failed and we were unable to recover it. 00:39:29.633 [2024-11-07 13:44:37.415726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.633 [2024-11-07 13:44:37.415742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.633 qpair failed and we were unable to recover it. 00:39:29.633 [2024-11-07 13:44:37.415912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.633 [2024-11-07 13:44:37.415929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.633 qpair failed and we were unable to recover it. 00:39:29.633 [2024-11-07 13:44:37.416210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.416226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.416579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.416595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.416761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.416776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.417022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.417040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.417358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.417375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.417698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.417714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.417920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.417936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 
00:39:29.634 [2024-11-07 13:44:37.418284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.418300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.418666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.418682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.418897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.418914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.419229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.419245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.419437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.419455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.419741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.419756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.419944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.419960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.420154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.420171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.420475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.420491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.420658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.420674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 
00:39:29.634 [2024-11-07 13:44:37.420727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.420742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.420944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.420959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.421294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.421310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.421625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.421641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.421876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.421892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.422115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.422130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.422434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.422448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.422770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.422784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.423101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.423115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.423441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.423455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 
00:39:29.634 [2024-11-07 13:44:37.423707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.423721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.424056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.424072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.424249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.424263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.424545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.424565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.424760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.424775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.424953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.424968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.425253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.425270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.425612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.425627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.425940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.634 [2024-11-07 13:44:37.425955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.634 qpair failed and we were unable to recover it. 00:39:29.634 [2024-11-07 13:44:37.426247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.426262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 
00:39:29.635 [2024-11-07 13:44:37.426449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.426463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.426784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.426799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.427018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.427034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.427370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.427385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.427731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.427746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.428087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.428103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.428387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.428402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.428700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.428716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.429041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.429056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.429388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.429404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 
00:39:29.635 [2024-11-07 13:44:37.429704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.429718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.430027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.430042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.430231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.430246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.430587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.430602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.430928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.430944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.431144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.431158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.431355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.431370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.431563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.431577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.431850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.431870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.432206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.432220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 
00:39:29.635 [2024-11-07 13:44:37.432417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.432431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.432639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.432654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.432987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.433002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.433303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.433318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.433686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.433700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.433885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.433900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.433967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.433982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.434291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.434306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.434499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.434513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.434835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.434850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 
00:39:29.635 [2024-11-07 13:44:37.435171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.435186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.435502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.435516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.435817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.435832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.436220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.436237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.436560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.436574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.635 qpair failed and we were unable to recover it. 00:39:29.635 [2024-11-07 13:44:37.436855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.635 [2024-11-07 13:44:37.436874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.437179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.437194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.437500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.437517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.437728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.437743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.437958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.437973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 
00:39:29.636 [2024-11-07 13:44:37.438297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.438311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.438438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.438452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.438625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.438639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.438814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.438828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.439014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.439029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.439357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.439371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.439697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.439712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.440067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.440082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.440268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.440284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.440365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.440379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 
00:39:29.636 [2024-11-07 13:44:37.440706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.440721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.441060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.441075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.441136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.441150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.441485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.441499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.441836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.441850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.442191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.442206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.442519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.442535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.442858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.442876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.443219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.443235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.443567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.443582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 
00:39:29.636 [2024-11-07 13:44:37.443766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.443781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.443949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.443964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.444243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.444258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.444561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.444575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.444918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.444933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.445248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.445262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.445445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.445461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.445748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.445762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.446072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.446088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.446421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.446436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 
00:39:29.636 [2024-11-07 13:44:37.446776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.446790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.636 [2024-11-07 13:44:37.447096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.636 [2024-11-07 13:44:37.447111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.636 qpair failed and we were unable to recover it. 00:39:29.637 [2024-11-07 13:44:37.447446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.637 [2024-11-07 13:44:37.447461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.637 qpair failed and we were unable to recover it. 00:39:29.637 [2024-11-07 13:44:37.447804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.637 [2024-11-07 13:44:37.447821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.637 qpair failed and we were unable to recover it. 00:39:29.637 [2024-11-07 13:44:37.448007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.637 [2024-11-07 13:44:37.448022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.637 qpair failed and we were unable to recover it. 00:39:29.637 [2024-11-07 13:44:37.448389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.637 [2024-11-07 13:44:37.448405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.637 qpair failed and we were unable to recover it. 00:39:29.637 [2024-11-07 13:44:37.448734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.637 [2024-11-07 13:44:37.448750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.637 qpair failed and we were unable to recover it. 00:39:29.637 [2024-11-07 13:44:37.448944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.637 [2024-11-07 13:44:37.448959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.637 qpair failed and we were unable to recover it. 00:39:29.637 [2024-11-07 13:44:37.449236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.637 [2024-11-07 13:44:37.449250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.637 qpair failed and we were unable to recover it. 00:39:29.637 [2024-11-07 13:44:37.449595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:29.637 [2024-11-07 13:44:37.449611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000417b00 with addr=10.0.0.2, port=4420 00:39:29.637 qpair failed and we were unable to recover it. 
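For reference while reading the run above: errno 111 on Linux is ECONNREFUSED, i.e. nothing is accepting TCP connections at 10.0.0.2:4420 while the host keeps retrying, which is the condition this target_disconnect test provokes. A minimal standalone C sketch (not SPDK code; the loopback address and port here are only illustrative) that produces the same errno by connecting to a port with no listener:

/* sketch: reproduce errno 111 (ECONNREFUSED) with a plain connect() */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    addr.sin_port = htons(4420);                      /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assume no listener here */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* with no listener this prints: connect: errno = 111 (Connection refused) */
        printf("connect: errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}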
00:39:29.637 [... repeated connect() failed, errno = 111 / tqpair=0x615000417b00 qpair-failure sequence, 13:44:37.449934 through 13:44:37.451277 ...]
00:39:29.637 Malloc0
00:39:29.637 [... sequence continues, 13:44:37.451650 through 13:44:37.452188 ...]
00:39:29.637 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:29.637 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:39:29.637 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:29.637 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:29.637 [... the tqpair=0x615000417b00 connect()/qpair-failure sequence continues interleaved with the shell trace above, 13:44:37.452504 through 13:44:37.454130 ...]
00:39:29.637 [... the tqpair=0x615000417b00 connect()/qpair-failure sequence continues, 13:44:37.454317 through 13:44:37.455938 ...]
00:39:29.637 [2024-11-07 13:44:37.456307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:29.637 [2024-11-07 13:44:37.456347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500042fe80 with addr=10.0.0.2, port=4420
00:39:29.637 qpair failed and we were unable to recover it.
00:39:29.638 [... one further identical failure against the new tqpair=0x61500042fe80 at 13:44:37.456702 ...]
00:39:29.638 [... errno = 111 retries against tqpair=0x61500042fe80 continue, 13:44:37.457152 onward ...]
00:39:29.638 [2024-11-07 13:44:37.458776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:39:29.639 [... roughly 30 further errno = 111 connect retries against tqpair=0x61500042fe80, 13:44:37.458992 through 13:44:37.466799, each ending "qpair failed and we were unable to recover it." ...]
00:39:29.639 [... errno = 111 retries continue, 13:44:37.467102 through 13:44:37.468043 ...]
00:39:29.639 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:29.639 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:39:29.639 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:29.639 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:29.639 [... errno = 111 retries interleaved with the trace above, 13:44:37.468371 through 13:44:37.469262 ...]
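The rpc_cmd above creates the subsystem the host is trying to reach. Equivalent manual invocation, same assumptions as the transport sketch; -a allows any host NQN and -s sets the serial number, as in the trace:

    # create subsystem cnode1, allow any host, set its serial number
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001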
00:39:29.639 [... roughly 35 further errno = 111 connect retries against tqpair=0x61500042fe80, 13:44:37.469464 through 13:44:37.479811, each ending "qpair failed and we were unable to recover it." ...]
00:39:29.640 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:29.640 [... one more errno = 111 retry, 13:44:37.480135 ...]
00:39:29.640 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:39:29.640 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:29.640 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:29.640 [... errno = 111 retries interleaved with the trace above, 13:44:37.480453 through 13:44:37.482326 ...]
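The rpc_cmd above attaches the Malloc0 bdev as a namespace of cnode1. A sketch that also creates the backing bdev first, in case it does not already exist; the 64 MiB / 512-byte sizing is illustrative, not taken from this log:

    # hypothetical backing bdev: 64 MiB, 512-byte blocks, named Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # expose it as a namespace of cnode1 (mirrors the rpc_cmd above)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0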
00:39:29.640 [... roughly 30 further errno = 111 connect retries against tqpair=0x61500042fe80, 13:44:37.482639 through 13:44:37.491104, each ending "qpair failed and we were unable to recover it." ...]
00:39:29.641 [... errno = 111 retries continue, 13:44:37.491410 through 13:44:37.492320 ...]
00:39:29.641 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:29.641 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:39:29.641 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:29.641 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:29.641 [... errno = 111 retries interleaved with the trace above, 13:44:37.492569 through 13:44:37.493463 ...]
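The rpc_cmd above is the step the host's connect loop has been waiting for: it opens the TCP listener on 10.0.0.2:4420, and the nvmf_tcp_listen NOTICE further down is its target-side confirmation. Manual equivalent, same assumptions as the earlier sketches:

    # listen for NVMe/TCP connections on 10.0.0.2:4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420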
00:39:29.641 [... roughly 20 further errno = 111 connect retries against tqpair=0x61500042fe80, 13:44:37.493632 through 13:44:37.499292, each ending "qpair failed and we were unable to recover it." ...]
00:39:29.642 [2024-11-07 13:44:37.499401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:39:29.642 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:29.642 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:39:29.642 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:29.642 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:29.642 [2024-11-07 13:44:37.510246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.642 [2024-11-07 13:44:37.510384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.642 [2024-11-07 13:44:37.510403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.642 [2024-11-07 13:44:37.510415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.642 [2024-11-07 13:44:37.510423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.642 [2024-11-07 13:44:37.510447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.642 qpair failed and we were unable to recover it.
00:39:29.642 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:29.642 13:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4147890
00:39:29.642 [... the same "Unknown controller ID 0x1" / Fabric CONNECT failure sequence repeated at 13:44:37.520021, ending "qpair failed and we were unable to recover it." ...]
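Two things are worth decoding in the block above. The last rpc_cmd publishes the same port through the discovery subsystem (sketch below, same assumptions as before). And the failure mode has changed: TCP connects now succeed, but the Fabrics CONNECT for the I/O queue is rejected with sct 1, sc 130, i.e. command-specific status 0x82 (CONNECT Invalid Parameters), which lines up with the target-side "Unknown controller ID 0x1" from _nvmf_ctrlr_add_io_qpair: the host is adding an I/O qpair for a controller the target no longer tracks, the disconnect condition nvmf_target_disconnect_tc2 is designed to exercise.

    # advertise the port through the discovery subsystem as well
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420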
00:39:29.642 [2024-11-07 13:44:37.530103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.642 [2024-11-07 13:44:37.530185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.642 [2024-11-07 13:44:37.530202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.642 [2024-11-07 13:44:37.530210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.642 [2024-11-07 13:44:37.530217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.642 [2024-11-07 13:44:37.530235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.642 qpair failed and we were unable to recover it.
00:39:29.642 [2024-11-07 13:44:37.540092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.642 [2024-11-07 13:44:37.540175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.642 [2024-11-07 13:44:37.540192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.642 [2024-11-07 13:44:37.540200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.642 [2024-11-07 13:44:37.540206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.642 [2024-11-07 13:44:37.540223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.642 qpair failed and we were unable to recover it.
00:39:29.642 [2024-11-07 13:44:37.550108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.642 [2024-11-07 13:44:37.550182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.642 [2024-11-07 13:44:37.550198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.642 [2024-11-07 13:44:37.550207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.642 [2024-11-07 13:44:37.550214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.642 [2024-11-07 13:44:37.550229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.642 qpair failed and we were unable to recover it.
00:39:29.642 [2024-11-07 13:44:37.559995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.642 [2024-11-07 13:44:37.560063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.643 [2024-11-07 13:44:37.560079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.643 [2024-11-07 13:44:37.560090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.643 [2024-11-07 13:44:37.560097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.643 [2024-11-07 13:44:37.560112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.643 qpair failed and we were unable to recover it.
00:39:29.643 [2024-11-07 13:44:37.570137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.643 [2024-11-07 13:44:37.570204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.643 [2024-11-07 13:44:37.570220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.643 [2024-11-07 13:44:37.570228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.643 [2024-11-07 13:44:37.570239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.643 [2024-11-07 13:44:37.570255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.643 qpair failed and we were unable to recover it.
00:39:29.643 [2024-11-07 13:44:37.580214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.643 [2024-11-07 13:44:37.580286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.643 [2024-11-07 13:44:37.580302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.643 [2024-11-07 13:44:37.580311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.643 [2024-11-07 13:44:37.580318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.643 [2024-11-07 13:44:37.580333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.643 qpair failed and we were unable to recover it.
00:39:29.643 [2024-11-07 13:44:37.590186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.643 [2024-11-07 13:44:37.590256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.643 [2024-11-07 13:44:37.590272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.643 [2024-11-07 13:44:37.590281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.643 [2024-11-07 13:44:37.590287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.643 [2024-11-07 13:44:37.590303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.643 qpair failed and we were unable to recover it.
00:39:29.643 [2024-11-07 13:44:37.600301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.643 [2024-11-07 13:44:37.600392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.643 [2024-11-07 13:44:37.600409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.643 [2024-11-07 13:44:37.600418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.643 [2024-11-07 13:44:37.600424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.643 [2024-11-07 13:44:37.600443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.643 qpair failed and we were unable to recover it.
00:39:29.643 [2024-11-07 13:44:37.610249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.643 [2024-11-07 13:44:37.610364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.643 [2024-11-07 13:44:37.610380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.643 [2024-11-07 13:44:37.610388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.643 [2024-11-07 13:44:37.610395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.643 [2024-11-07 13:44:37.610410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.643 qpair failed and we were unable to recover it.
00:39:29.905 [2024-11-07 13:44:37.620317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.620389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.620405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.620414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.620421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.620436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.906 qpair failed and we were unable to recover it.
00:39:29.906 [2024-11-07 13:44:37.630252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.630319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.630335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.630343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.630350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.630366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.906 qpair failed and we were unable to recover it.
00:39:29.906 [2024-11-07 13:44:37.640333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.640404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.640421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.640429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.640435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.640452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.906 qpair failed and we were unable to recover it.
00:39:29.906 [2024-11-07 13:44:37.650362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.650425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.650441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.650449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.650456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.650474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.906 qpair failed and we were unable to recover it.
00:39:29.906 [2024-11-07 13:44:37.660425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.660531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.660548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.660556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.660562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.660578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.906 qpair failed and we were unable to recover it.
00:39:29.906 [2024-11-07 13:44:37.670426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.670523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.670539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.670548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.670554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.670570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.906 qpair failed and we were unable to recover it.
00:39:29.906 [2024-11-07 13:44:37.680386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.680454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.680470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.680478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.680485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.680500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.906 qpair failed and we were unable to recover it.
00:39:29.906 [2024-11-07 13:44:37.690471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.690544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.690562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.690571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.690577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.690592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.906 qpair failed and we were unable to recover it.
00:39:29.906 [2024-11-07 13:44:37.700530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.700598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.700614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.700623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.700629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.700645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.906 qpair failed and we were unable to recover it.
00:39:29.906 [2024-11-07 13:44:37.710536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.710624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.710641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.710649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.710656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.710672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.906 qpair failed and we were unable to recover it.
00:39:29.906 [2024-11-07 13:44:37.720549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.720621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.720636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.720644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.720651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.720666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.906 qpair failed and we were unable to recover it.
00:39:29.906 [2024-11-07 13:44:37.730559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.730629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.730645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.730653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.730662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.730678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.906 qpair failed and we were unable to recover it.
00:39:29.906 [2024-11-07 13:44:37.740622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.906 [2024-11-07 13:44:37.740691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.906 [2024-11-07 13:44:37.740707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.906 [2024-11-07 13:44:37.740715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.906 [2024-11-07 13:44:37.740722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.906 [2024-11-07 13:44:37.740737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.750654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.750769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.750785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.750793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.750800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.750816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.760653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.760718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.760734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.760742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.760748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.760764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.770678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.770743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.770760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.770768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.770774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.770789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.780763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.780849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.780869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.780877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.780883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.780900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.790776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.790845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.790867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.790875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.790881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.790898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.800669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.800731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.800747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.800755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.800762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.800777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.810775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.810844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.810860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.810872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.810878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.810895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.820841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.820927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.820946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.820955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.820962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.820978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.830761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.830839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.830874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.830883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.830889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.830905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.840961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.841028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.841045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.841053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.841059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.841075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.850937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.851051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.851068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.851076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.851083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.851100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.860854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.860935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.860951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.860959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.860969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.860986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.870986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.871062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.907 [2024-11-07 13:44:37.871079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.907 [2024-11-07 13:44:37.871087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.907 [2024-11-07 13:44:37.871093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.907 [2024-11-07 13:44:37.871108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.907 qpair failed and we were unable to recover it.
00:39:29.907 [2024-11-07 13:44:37.880974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.907 [2024-11-07 13:44:37.881068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.908 [2024-11-07 13:44:37.881085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.908 [2024-11-07 13:44:37.881093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.908 [2024-11-07 13:44:37.881099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.908 [2024-11-07 13:44:37.881115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.908 qpair failed and we were unable to recover it.
00:39:29.908 [2024-11-07 13:44:37.890994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.908 [2024-11-07 13:44:37.891057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.908 [2024-11-07 13:44:37.891074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.908 [2024-11-07 13:44:37.891082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.908 [2024-11-07 13:44:37.891088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.908 [2024-11-07 13:44:37.891104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.908 qpair failed and we were unable to recover it.
00:39:29.908 [2024-11-07 13:44:37.901033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.908 [2024-11-07 13:44:37.901105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.908 [2024-11-07 13:44:37.901121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.908 [2024-11-07 13:44:37.901129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.908 [2024-11-07 13:44:37.901136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:29.908 [2024-11-07 13:44:37.901151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.908 qpair failed and we were unable to recover it.
00:39:30.170 [2024-11-07 13:44:37.911125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.170 [2024-11-07 13:44:37.911193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.170 [2024-11-07 13:44:37.911209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.170 [2024-11-07 13:44:37.911218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.170 [2024-11-07 13:44:37.911224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.170 [2024-11-07 13:44:37.911239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.170 qpair failed and we were unable to recover it.
00:39:30.170 [2024-11-07 13:44:37.921109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.170 [2024-11-07 13:44:37.921179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.170 [2024-11-07 13:44:37.921195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.170 [2024-11-07 13:44:37.921203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.170 [2024-11-07 13:44:37.921210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.170 [2024-11-07 13:44:37.921225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.170 qpair failed and we were unable to recover it.
00:39:30.170 [2024-11-07 13:44:37.931148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.170 [2024-11-07 13:44:37.931228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.170 [2024-11-07 13:44:37.931244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.170 [2024-11-07 13:44:37.931252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.170 [2024-11-07 13:44:37.931259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.170 [2024-11-07 13:44:37.931275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.170 qpair failed and we were unable to recover it.
00:39:30.170 [2024-11-07 13:44:37.941073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.170 [2024-11-07 13:44:37.941158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.170 [2024-11-07 13:44:37.941174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.170 [2024-11-07 13:44:37.941182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.170 [2024-11-07 13:44:37.941189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.170 [2024-11-07 13:44:37.941204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.170 qpair failed and we were unable to recover it.
00:39:30.170 [2024-11-07 13:44:37.951211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.170 [2024-11-07 13:44:37.951311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.170 [2024-11-07 13:44:37.951330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.170 [2024-11-07 13:44:37.951339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.170 [2024-11-07 13:44:37.951345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.170 [2024-11-07 13:44:37.951361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.170 qpair failed and we were unable to recover it.
00:39:30.170 [2024-11-07 13:44:37.961146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.170 [2024-11-07 13:44:37.961243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.170 [2024-11-07 13:44:37.961260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.170 [2024-11-07 13:44:37.961268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.170 [2024-11-07 13:44:37.961274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.170 [2024-11-07 13:44:37.961290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.170 qpair failed and we were unable to recover it.
00:39:30.170 [2024-11-07 13:44:37.971264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.170 [2024-11-07 13:44:37.971333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.170 [2024-11-07 13:44:37.971350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.170 [2024-11-07 13:44:37.971358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.170 [2024-11-07 13:44:37.971364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.170 [2024-11-07 13:44:37.971381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.170 qpair failed and we were unable to recover it.
00:39:30.170 [2024-11-07 13:44:37.981286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.170 [2024-11-07 13:44:37.981353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:37.981369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:37.981377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:37.981383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:37.981401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:37.991293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:37.991376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:37.991393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:37.991405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:37.991412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:37.991427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:38.001303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:38.001372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:38.001388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:38.001397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:38.001403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:38.001419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:38.011271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:38.011333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:38.011350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:38.011358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:38.011365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:38.011381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:38.021367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:38.021434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:38.021450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:38.021458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:38.021465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:38.021480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:38.031425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:38.031512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:38.031527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:38.031536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:38.031542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:38.031558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:38.041412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:38.041479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:38.041495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:38.041503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:38.041509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:38.041525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:38.051348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:38.051415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:38.051432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:38.051440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:38.051446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:38.051465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:38.061527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:38.061597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:38.061614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:38.061621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:38.061628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:38.061644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:38.071507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:38.071595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:38.071612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:38.071620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:38.071626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:38.071642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:38.081545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:38.081610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:38.081626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:38.081634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:38.081640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:38.081661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:38.091583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:38.091650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:38.091666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:38.091674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:38.091680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:38.091696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:38.101606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:38.101674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:38.101690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.171 [2024-11-07 13:44:38.101698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.171 [2024-11-07 13:44:38.101705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.171 [2024-11-07 13:44:38.101720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.171 qpair failed and we were unable to recover it.
00:39:30.171 [2024-11-07 13:44:38.111646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.171 [2024-11-07 13:44:38.111729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.171 [2024-11-07 13:44:38.111745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-07 13:44:38.111754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-07 13:44:38.111760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.172 [2024-11-07 13:44:38.111776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-07 13:44:38.121636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-07 13:44:38.121728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-07 13:44:38.121744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-07 13:44:38.121755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-07 13:44:38.121761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.172 [2024-11-07 13:44:38.121777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-07 13:44:38.131647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-07 13:44:38.131713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-07 13:44:38.131729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-07 13:44:38.131737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-07 13:44:38.131743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.172 [2024-11-07 13:44:38.131759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-07 13:44:38.141707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-07 13:44:38.141776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-07 13:44:38.141792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-07 13:44:38.141800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-07 13:44:38.141806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.172 [2024-11-07 13:44:38.141822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-07 13:44:38.151741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-07 13:44:38.151808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-07 13:44:38.151825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-07 13:44:38.151833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-07 13:44:38.151839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.172 [2024-11-07 13:44:38.151855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-07 13:44:38.161667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-07 13:44:38.161733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-07 13:44:38.161749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-07 13:44:38.161757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-07 13:44:38.161764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.172 [2024-11-07 13:44:38.161782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-07 13:44:38.171694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-07 13:44:38.171762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-07 13:44:38.171778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-07 13:44:38.171786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-07 13:44:38.171793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.172 [2024-11-07 13:44:38.171808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.435 [2024-11-07 13:44:38.181819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.435 [2024-11-07 13:44:38.181892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.435 [2024-11-07 13:44:38.181908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.435 [2024-11-07 13:44:38.181916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.435 [2024-11-07 13:44:38.181922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.435 [2024-11-07 13:44:38.181939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.435 qpair failed and we were unable to recover it.
00:39:30.435 [2024-11-07 13:44:38.191846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.435 [2024-11-07 13:44:38.191966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.435 [2024-11-07 13:44:38.191984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.435 [2024-11-07 13:44:38.191992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.435 [2024-11-07 13:44:38.191998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.435 [2024-11-07 13:44:38.192014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.435 qpair failed and we were unable to recover it.
00:39:30.435 [2024-11-07 13:44:38.201809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.435 [2024-11-07 13:44:38.201881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.435 [2024-11-07 13:44:38.201898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.435 [2024-11-07 13:44:38.201906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.435 [2024-11-07 13:44:38.201913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.435 [2024-11-07 13:44:38.201930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.435 qpair failed and we were unable to recover it.
00:39:30.435 [2024-11-07 13:44:38.211891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.435 [2024-11-07 13:44:38.211956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.435 [2024-11-07 13:44:38.211972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.435 [2024-11-07 13:44:38.211980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.435 [2024-11-07 13:44:38.211987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.435 [2024-11-07 13:44:38.212003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.435 qpair failed and we were unable to recover it.
00:39:30.435 [2024-11-07 13:44:38.221938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.435 [2024-11-07 13:44:38.222035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.435 [2024-11-07 13:44:38.222051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.435 [2024-11-07 13:44:38.222060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.435 [2024-11-07 13:44:38.222066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.435 [2024-11-07 13:44:38.222082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.435 qpair failed and we were unable to recover it.
00:39:30.435 [2024-11-07 13:44:38.231943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.435 [2024-11-07 13:44:38.232006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.435 [2024-11-07 13:44:38.232022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.435 [2024-11-07 13:44:38.232030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.435 [2024-11-07 13:44:38.232036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.435 [2024-11-07 13:44:38.232052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.435 qpair failed and we were unable to recover it.
00:39:30.435 [2024-11-07 13:44:38.241975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.435 [2024-11-07 13:44:38.242040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.435 [2024-11-07 13:44:38.242057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.435 [2024-11-07 13:44:38.242065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.435 [2024-11-07 13:44:38.242071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.435 [2024-11-07 13:44:38.242087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.435 qpair failed and we were unable to recover it.
00:39:30.435 [2024-11-07 13:44:38.251978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.435 [2024-11-07 13:44:38.252047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.435 [2024-11-07 13:44:38.252065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.435 [2024-11-07 13:44:38.252074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.435 [2024-11-07 13:44:38.252080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.435 [2024-11-07 13:44:38.252096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.435 qpair failed and we were unable to recover it.
00:39:30.435 [2024-11-07 13:44:38.261951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.435 [2024-11-07 13:44:38.262019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.435 [2024-11-07 13:44:38.262035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.435 [2024-11-07 13:44:38.262043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.435 [2024-11-07 13:44:38.262050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.435 [2024-11-07 13:44:38.262065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.435 qpair failed and we were unable to recover it.
00:39:30.435 [2024-11-07 13:44:38.272054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.435 [2024-11-07 13:44:38.272140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.435 [2024-11-07 13:44:38.272155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.272163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.436 [2024-11-07 13:44:38.272170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.436 [2024-11-07 13:44:38.272186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.436 qpair failed and we were unable to recover it.
00:39:30.436 [2024-11-07 13:44:38.282095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.436 [2024-11-07 13:44:38.282159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.436 [2024-11-07 13:44:38.282175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.282183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.436 [2024-11-07 13:44:38.282190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.436 [2024-11-07 13:44:38.282205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.436 qpair failed and we were unable to recover it.
00:39:30.436 [2024-11-07 13:44:38.292128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.436 [2024-11-07 13:44:38.292201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.436 [2024-11-07 13:44:38.292217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.292225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.436 [2024-11-07 13:44:38.292233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.436 [2024-11-07 13:44:38.292250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.436 qpair failed and we were unable to recover it.
00:39:30.436 [2024-11-07 13:44:38.302166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.436 [2024-11-07 13:44:38.302231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.436 [2024-11-07 13:44:38.302247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.302255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.436 [2024-11-07 13:44:38.302261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.436 [2024-11-07 13:44:38.302277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.436 qpair failed and we were unable to recover it.
00:39:30.436 [2024-11-07 13:44:38.312173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.436 [2024-11-07 13:44:38.312268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.436 [2024-11-07 13:44:38.312285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.312293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.436 [2024-11-07 13:44:38.312299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.436 [2024-11-07 13:44:38.312318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.436 qpair failed and we were unable to recover it.
00:39:30.436 [2024-11-07 13:44:38.322231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.436 [2024-11-07 13:44:38.322330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.436 [2024-11-07 13:44:38.322347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.322355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.436 [2024-11-07 13:44:38.322361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.436 [2024-11-07 13:44:38.322377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.436 qpair failed and we were unable to recover it.
00:39:30.436 [2024-11-07 13:44:38.332235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.436 [2024-11-07 13:44:38.332304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.436 [2024-11-07 13:44:38.332320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.332328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.436 [2024-11-07 13:44:38.332334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.436 [2024-11-07 13:44:38.332350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.436 qpair failed and we were unable to recover it.
00:39:30.436 [2024-11-07 13:44:38.342276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.436 [2024-11-07 13:44:38.342343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.436 [2024-11-07 13:44:38.342359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.342372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.436 [2024-11-07 13:44:38.342378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.436 [2024-11-07 13:44:38.342394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.436 qpair failed and we were unable to recover it.
00:39:30.436 [2024-11-07 13:44:38.352288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.436 [2024-11-07 13:44:38.352350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.436 [2024-11-07 13:44:38.352366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.352374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.436 [2024-11-07 13:44:38.352380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.436 [2024-11-07 13:44:38.352396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.436 qpair failed and we were unable to recover it.
00:39:30.436 [2024-11-07 13:44:38.362271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.436 [2024-11-07 13:44:38.362368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.436 [2024-11-07 13:44:38.362385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.362393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.436 [2024-11-07 13:44:38.362400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.436 [2024-11-07 13:44:38.362415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.436 qpair failed and we were unable to recover it.
00:39:30.436 [2024-11-07 13:44:38.372352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.436 [2024-11-07 13:44:38.372420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.436 [2024-11-07 13:44:38.372436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.372445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.436 [2024-11-07 13:44:38.372451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.436 [2024-11-07 13:44:38.372466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.436 qpair failed and we were unable to recover it.
00:39:30.436 [2024-11-07 13:44:38.382427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.436 [2024-11-07 13:44:38.382492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.436 [2024-11-07 13:44:38.382511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.382519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.436 [2024-11-07 13:44:38.382525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.436 [2024-11-07 13:44:38.382541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.436 qpair failed and we were unable to recover it.
00:39:30.436 [2024-11-07 13:44:38.392334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.436 [2024-11-07 13:44:38.392434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.436 [2024-11-07 13:44:38.392451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.436 [2024-11-07 13:44:38.392460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.437 [2024-11-07 13:44:38.392466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.437 [2024-11-07 13:44:38.392486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.437 qpair failed and we were unable to recover it.
00:39:30.437 [2024-11-07 13:44:38.402357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.437 [2024-11-07 13:44:38.402431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.437 [2024-11-07 13:44:38.402448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.437 [2024-11-07 13:44:38.402455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.437 [2024-11-07 13:44:38.402462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.437 [2024-11-07 13:44:38.402478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.437 qpair failed and we were unable to recover it.
00:39:30.437 [2024-11-07 13:44:38.412386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.437 [2024-11-07 13:44:38.412451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.437 [2024-11-07 13:44:38.412467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.437 [2024-11-07 13:44:38.412476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.437 [2024-11-07 13:44:38.412482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.437 [2024-11-07 13:44:38.412499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.437 qpair failed and we were unable to recover it.
00:39:30.437 [2024-11-07 13:44:38.422400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.437 [2024-11-07 13:44:38.422465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.437 [2024-11-07 13:44:38.422481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.437 [2024-11-07 13:44:38.422489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.437 [2024-11-07 13:44:38.422498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.437 [2024-11-07 13:44:38.422514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.437 qpair failed and we were unable to recover it.
00:39:30.437 [2024-11-07 13:44:38.432538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.437 [2024-11-07 13:44:38.432660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.437 [2024-11-07 13:44:38.432677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.437 [2024-11-07 13:44:38.432685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.437 [2024-11-07 13:44:38.432691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.437 [2024-11-07 13:44:38.432708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.437 qpair failed and we were unable to recover it.
00:39:30.699 [2024-11-07 13:44:38.442675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.699 [2024-11-07 13:44:38.442769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.699 [2024-11-07 13:44:38.442785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.699 [2024-11-07 13:44:38.442794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.699 [2024-11-07 13:44:38.442800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.699 [2024-11-07 13:44:38.442816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.699 qpair failed and we were unable to recover it.
00:39:30.699 [2024-11-07 13:44:38.452611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.699 [2024-11-07 13:44:38.452698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.699 [2024-11-07 13:44:38.452713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.699 [2024-11-07 13:44:38.452723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.699 [2024-11-07 13:44:38.452729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.699 [2024-11-07 13:44:38.452746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.699 qpair failed and we were unable to recover it.
00:39:30.699 [2024-11-07 13:44:38.462630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.699 [2024-11-07 13:44:38.462698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.699 [2024-11-07 13:44:38.462715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.699 [2024-11-07 13:44:38.462723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.699 [2024-11-07 13:44:38.462729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.462745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.472659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.472723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.700 [2024-11-07 13:44:38.472740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.700 [2024-11-07 13:44:38.472748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.700 [2024-11-07 13:44:38.472754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.472770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.482667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.482735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.700 [2024-11-07 13:44:38.482751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.700 [2024-11-07 13:44:38.482760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.700 [2024-11-07 13:44:38.482766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.482782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.492724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.492799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.700 [2024-11-07 13:44:38.492815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.700 [2024-11-07 13:44:38.492823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.700 [2024-11-07 13:44:38.492830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.492846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.502734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.502799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.700 [2024-11-07 13:44:38.502815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.700 [2024-11-07 13:44:38.502823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.700 [2024-11-07 13:44:38.502829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.502845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.512850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.512951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.700 [2024-11-07 13:44:38.512970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.700 [2024-11-07 13:44:38.512978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.700 [2024-11-07 13:44:38.512985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.513001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.522781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.522849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.700 [2024-11-07 13:44:38.522871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.700 [2024-11-07 13:44:38.522880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.700 [2024-11-07 13:44:38.522886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.522903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.532808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.532903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.700 [2024-11-07 13:44:38.532920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.700 [2024-11-07 13:44:38.532928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.700 [2024-11-07 13:44:38.532935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.532951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.542767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.542832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.700 [2024-11-07 13:44:38.542848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.700 [2024-11-07 13:44:38.542856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.700 [2024-11-07 13:44:38.542867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.542887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.552886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.552955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.700 [2024-11-07 13:44:38.552971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.700 [2024-11-07 13:44:38.552982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.700 [2024-11-07 13:44:38.552989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.553005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.562797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.562891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.700 [2024-11-07 13:44:38.562909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.700 [2024-11-07 13:44:38.562916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.700 [2024-11-07 13:44:38.562923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.562939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.572922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.572993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.700 [2024-11-07 13:44:38.573009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.700 [2024-11-07 13:44:38.573017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.700 [2024-11-07 13:44:38.573024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.573041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.582949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.583022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.700 [2024-11-07 13:44:38.583038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.700 [2024-11-07 13:44:38.583046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.700 [2024-11-07 13:44:38.583052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.700 [2024-11-07 13:44:38.583068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.700 qpair failed and we were unable to recover it.
00:39:30.700 [2024-11-07 13:44:38.592873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.700 [2024-11-07 13:44:38.592985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.701 [2024-11-07 13:44:38.593001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.701 [2024-11-07 13:44:38.593010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.701 [2024-11-07 13:44:38.593016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.701 [2024-11-07 13:44:38.593034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.701 qpair failed and we were unable to recover it.
00:39:30.701 [2024-11-07 13:44:38.602926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.701 [2024-11-07 13:44:38.602991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.701 [2024-11-07 13:44:38.603007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.701 [2024-11-07 13:44:38.603015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.701 [2024-11-07 13:44:38.603022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.701 [2024-11-07 13:44:38.603044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.701 qpair failed and we were unable to recover it.
00:39:30.701 [2024-11-07 13:44:38.613102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.701 [2024-11-07 13:44:38.613170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.701 [2024-11-07 13:44:38.613186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.701 [2024-11-07 13:44:38.613194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.701 [2024-11-07 13:44:38.613200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.701 [2024-11-07 13:44:38.613217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.701 qpair failed and we were unable to recover it.
00:39:30.701 [2024-11-07 13:44:38.623065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.701 [2024-11-07 13:44:38.623133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.701 [2024-11-07 13:44:38.623149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.701 [2024-11-07 13:44:38.623157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.701 [2024-11-07 13:44:38.623163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.701 [2024-11-07 13:44:38.623179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.701 qpair failed and we were unable to recover it.
00:39:30.701 [2024-11-07 13:44:38.633088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.701 [2024-11-07 13:44:38.633195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.701 [2024-11-07 13:44:38.633211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.701 [2024-11-07 13:44:38.633220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.701 [2024-11-07 13:44:38.633226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.701 [2024-11-07 13:44:38.633242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.701 qpair failed and we were unable to recover it.
00:39:30.701 [2024-11-07 13:44:38.643094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.701 [2024-11-07 13:44:38.643163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.701 [2024-11-07 13:44:38.643179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.701 [2024-11-07 13:44:38.643187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.701 [2024-11-07 13:44:38.643194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.701 [2024-11-07 13:44:38.643225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.701 qpair failed and we were unable to recover it.
00:39:30.701 [2024-11-07 13:44:38.653145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.701 [2024-11-07 13:44:38.653250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.701 [2024-11-07 13:44:38.653267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.701 [2024-11-07 13:44:38.653275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.701 [2024-11-07 13:44:38.653281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.701 [2024-11-07 13:44:38.653297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.701 qpair failed and we were unable to recover it.
00:39:30.701 [2024-11-07 13:44:38.663254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.701 [2024-11-07 13:44:38.663351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.701 [2024-11-07 13:44:38.663366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.701 [2024-11-07 13:44:38.663375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.701 [2024-11-07 13:44:38.663381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.701 [2024-11-07 13:44:38.663396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.701 qpair failed and we were unable to recover it.
00:39:30.701 [2024-11-07 13:44:38.673153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.701 [2024-11-07 13:44:38.673218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.701 [2024-11-07 13:44:38.673235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.701 [2024-11-07 13:44:38.673243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.701 [2024-11-07 13:44:38.673249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.701 [2024-11-07 13:44:38.673264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.701 qpair failed and we were unable to recover it.
00:39:30.701 [2024-11-07 13:44:38.683258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.701 [2024-11-07 13:44:38.683328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.701 [2024-11-07 13:44:38.683344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.701 [2024-11-07 13:44:38.683355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.701 [2024-11-07 13:44:38.683361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.701 [2024-11-07 13:44:38.683377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.701 qpair failed and we were unable to recover it.
00:39:30.701 [2024-11-07 13:44:38.693269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.701 [2024-11-07 13:44:38.693335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.701 [2024-11-07 13:44:38.693351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.701 [2024-11-07 13:44:38.693360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.701 [2024-11-07 13:44:38.693366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.701 [2024-11-07 13:44:38.693381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.701 qpair failed and we were unable to recover it.
00:39:30.964 [2024-11-07 13:44:38.703330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.964 [2024-11-07 13:44:38.703394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.964 [2024-11-07 13:44:38.703410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.964 [2024-11-07 13:44:38.703418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.964 [2024-11-07 13:44:38.703425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.964 [2024-11-07 13:44:38.703441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.964 qpair failed and we were unable to recover it.
00:39:30.964 [2024-11-07 13:44:38.713324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.964 [2024-11-07 13:44:38.713394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.964 [2024-11-07 13:44:38.713410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.964 [2024-11-07 13:44:38.713418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.964 [2024-11-07 13:44:38.713424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.964 [2024-11-07 13:44:38.713441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.964 qpair failed and we were unable to recover it.
00:39:30.964 [2024-11-07 13:44:38.723237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.964 [2024-11-07 13:44:38.723307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.964 [2024-11-07 13:44:38.723323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.964 [2024-11-07 13:44:38.723332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.964 [2024-11-07 13:44:38.723338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.964 [2024-11-07 13:44:38.723356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.964 qpair failed and we were unable to recover it.
00:39:30.964 [2024-11-07 13:44:38.733354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.964 [2024-11-07 13:44:38.733431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.964 [2024-11-07 13:44:38.733447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.964 [2024-11-07 13:44:38.733455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.964 [2024-11-07 13:44:38.733462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.964 [2024-11-07 13:44:38.733477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.964 qpair failed and we were unable to recover it.
00:39:30.964 [2024-11-07 13:44:38.743387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.964 [2024-11-07 13:44:38.743483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.964 [2024-11-07 13:44:38.743499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.964 [2024-11-07 13:44:38.743508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.964 [2024-11-07 13:44:38.743514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.964 [2024-11-07 13:44:38.743530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.964 qpair failed and we were unable to recover it.
00:39:30.964 [2024-11-07 13:44:38.753442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.964 [2024-11-07 13:44:38.753537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.964 [2024-11-07 13:44:38.753553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.964 [2024-11-07 13:44:38.753561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.964 [2024-11-07 13:44:38.753567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.965 [2024-11-07 13:44:38.753583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.965 qpair failed and we were unable to recover it.
00:39:30.965 [2024-11-07 13:44:38.763376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.965 [2024-11-07 13:44:38.763480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.965 [2024-11-07 13:44:38.763496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.965 [2024-11-07 13:44:38.763504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.965 [2024-11-07 13:44:38.763511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.965 [2024-11-07 13:44:38.763526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.965 qpair failed and we were unable to recover it.
00:39:30.965 [2024-11-07 13:44:38.773456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.965 [2024-11-07 13:44:38.773524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.965 [2024-11-07 13:44:38.773540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.965 [2024-11-07 13:44:38.773548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.965 [2024-11-07 13:44:38.773554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.965 [2024-11-07 13:44:38.773570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.965 qpair failed and we were unable to recover it.
00:39:30.965 [2024-11-07 13:44:38.783529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.965 [2024-11-07 13:44:38.783635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.965 [2024-11-07 13:44:38.783651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.965 [2024-11-07 13:44:38.783659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.965 [2024-11-07 13:44:38.783665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.965 [2024-11-07 13:44:38.783683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.965 qpair failed and we were unable to recover it.
00:39:30.965 [2024-11-07 13:44:38.793537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.965 [2024-11-07 13:44:38.793607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.965 [2024-11-07 13:44:38.793623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.965 [2024-11-07 13:44:38.793631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.965 [2024-11-07 13:44:38.793637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.965 [2024-11-07 13:44:38.793653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.965 qpair failed and we were unable to recover it.
00:39:30.965 [2024-11-07 13:44:38.803535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.965 [2024-11-07 13:44:38.803605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.965 [2024-11-07 13:44:38.803621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.965 [2024-11-07 13:44:38.803629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.965 [2024-11-07 13:44:38.803636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.965 [2024-11-07 13:44:38.803652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.965 qpair failed and we were unable to recover it.
00:39:30.965 [2024-11-07 13:44:38.813626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.965 [2024-11-07 13:44:38.813696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.965 [2024-11-07 13:44:38.813715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.965 [2024-11-07 13:44:38.813723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.965 [2024-11-07 13:44:38.813729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.965 [2024-11-07 13:44:38.813746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.965 qpair failed and we were unable to recover it.
00:39:30.965 [2024-11-07 13:44:38.823622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.965 [2024-11-07 13:44:38.823716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.965 [2024-11-07 13:44:38.823732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.965 [2024-11-07 13:44:38.823741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.965 [2024-11-07 13:44:38.823747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.965 [2024-11-07 13:44:38.823763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.965 qpair failed and we were unable to recover it.
00:39:30.965 [2024-11-07 13:44:38.833662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.965 [2024-11-07 13:44:38.833747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.965 [2024-11-07 13:44:38.833763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.965 [2024-11-07 13:44:38.833771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.965 [2024-11-07 13:44:38.833778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.965 [2024-11-07 13:44:38.833793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.965 qpair failed and we were unable to recover it.
00:39:30.965 [2024-11-07 13:44:38.843656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.965 [2024-11-07 13:44:38.843754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.965 [2024-11-07 13:44:38.843770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.965 [2024-11-07 13:44:38.843779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.965 [2024-11-07 13:44:38.843785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.965 [2024-11-07 13:44:38.843801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.965 qpair failed and we were unable to recover it.
00:39:30.965 [2024-11-07 13:44:38.853596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.965 [2024-11-07 13:44:38.853660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.965 [2024-11-07 13:44:38.853676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.965 [2024-11-07 13:44:38.853686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.965 [2024-11-07 13:44:38.853699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.965 [2024-11-07 13:44:38.853716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.965 qpair failed and we were unable to recover it.
00:39:30.965 [2024-11-07 13:44:38.863643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.965 [2024-11-07 13:44:38.863715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.965 [2024-11-07 13:44:38.863731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.965 [2024-11-07 13:44:38.863739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.965 [2024-11-07 13:44:38.863745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.965 [2024-11-07 13:44:38.863761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.965 qpair failed and we were unable to recover it.
00:39:30.965 [2024-11-07 13:44:38.873776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.966 [2024-11-07 13:44:38.873848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.966 [2024-11-07 13:44:38.873867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.966 [2024-11-07 13:44:38.873876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.966 [2024-11-07 13:44:38.873883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.966 [2024-11-07 13:44:38.873899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.966 qpair failed and we were unable to recover it.
00:39:30.966 [2024-11-07 13:44:38.883768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.966 [2024-11-07 13:44:38.883831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.966 [2024-11-07 13:44:38.883847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.966 [2024-11-07 13:44:38.883855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.966 [2024-11-07 13:44:38.883865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.966 [2024-11-07 13:44:38.883882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.966 qpair failed and we were unable to recover it.
00:39:30.966 [2024-11-07 13:44:38.893788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.966 [2024-11-07 13:44:38.893854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.966 [2024-11-07 13:44:38.893874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.966 [2024-11-07 13:44:38.893882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.966 [2024-11-07 13:44:38.893888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.966 [2024-11-07 13:44:38.893904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.966 qpair failed and we were unable to recover it.
00:39:30.966 [2024-11-07 13:44:38.903911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.966 [2024-11-07 13:44:38.903984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.966 [2024-11-07 13:44:38.904001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.966 [2024-11-07 13:44:38.904009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.966 [2024-11-07 13:44:38.904015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.966 [2024-11-07 13:44:38.904031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.966 qpair failed and we were unable to recover it.
00:39:30.966 [2024-11-07 13:44:38.913898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.966 [2024-11-07 13:44:38.913974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.966 [2024-11-07 13:44:38.913990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.966 [2024-11-07 13:44:38.913999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.966 [2024-11-07 13:44:38.914005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.966 [2024-11-07 13:44:38.914021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.966 qpair failed and we were unable to recover it.
00:39:30.966 [2024-11-07 13:44:38.923912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.966 [2024-11-07 13:44:38.923994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.966 [2024-11-07 13:44:38.924010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.966 [2024-11-07 13:44:38.924018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.966 [2024-11-07 13:44:38.924024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.966 [2024-11-07 13:44:38.924040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.966 qpair failed and we were unable to recover it.
00:39:30.966 [2024-11-07 13:44:38.933872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.966 [2024-11-07 13:44:38.933936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.966 [2024-11-07 13:44:38.933952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.966 [2024-11-07 13:44:38.933960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.966 [2024-11-07 13:44:38.933966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.966 [2024-11-07 13:44:38.933982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.966 qpair failed and we were unable to recover it.
00:39:30.966 [2024-11-07 13:44:38.943942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.966 [2024-11-07 13:44:38.944008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.966 [2024-11-07 13:44:38.944026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.966 [2024-11-07 13:44:38.944034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.966 [2024-11-07 13:44:38.944040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.966 [2024-11-07 13:44:38.944056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.966 qpair failed and we were unable to recover it.
00:39:30.966 [2024-11-07 13:44:38.953976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.966 [2024-11-07 13:44:38.954040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.966 [2024-11-07 13:44:38.954056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.966 [2024-11-07 13:44:38.954064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.966 [2024-11-07 13:44:38.954070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.966 [2024-11-07 13:44:38.954086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.966 qpair failed and we were unable to recover it.
00:39:30.966 [2024-11-07 13:44:38.963990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.966 [2024-11-07 13:44:38.964056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.966 [2024-11-07 13:44:38.964072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.966 [2024-11-07 13:44:38.964081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.966 [2024-11-07 13:44:38.964087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:30.966 [2024-11-07 13:44:38.964102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:30.966 qpair failed and we were unable to recover it.
00:39:31.229 [2024-11-07 13:44:38.974012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.229 [2024-11-07 13:44:38.974079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.230 [2024-11-07 13:44:38.974096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.230 [2024-11-07 13:44:38.974105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.230 [2024-11-07 13:44:38.974112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.230 [2024-11-07 13:44:38.974131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.230 qpair failed and we were unable to recover it.
00:39:31.230 [2024-11-07 13:44:38.984025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.230 [2024-11-07 13:44:38.984089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.230 [2024-11-07 13:44:38.984104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.230 [2024-11-07 13:44:38.984113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.230 [2024-11-07 13:44:38.984121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.230 [2024-11-07 13:44:38.984137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.230 qpair failed and we were unable to recover it.
00:39:31.230 [2024-11-07 13:44:38.994069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.230 [2024-11-07 13:44:38.994138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.230 [2024-11-07 13:44:38.994154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.230 [2024-11-07 13:44:38.994162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.230 [2024-11-07 13:44:38.994168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.230 [2024-11-07 13:44:38.994184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.230 qpair failed and we were unable to recover it.
00:39:31.230 [2024-11-07 13:44:39.004127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.230 [2024-11-07 13:44:39.004220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.230 [2024-11-07 13:44:39.004236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.230 [2024-11-07 13:44:39.004245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.230 [2024-11-07 13:44:39.004252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.230 [2024-11-07 13:44:39.004267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.230 qpair failed and we were unable to recover it.
00:39:31.230 [2024-11-07 13:44:39.014145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.230 [2024-11-07 13:44:39.014229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.230 [2024-11-07 13:44:39.014245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.230 [2024-11-07 13:44:39.014253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.230 [2024-11-07 13:44:39.014260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.230 [2024-11-07 13:44:39.014275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.230 qpair failed and we were unable to recover it.
00:39:31.230 [2024-11-07 13:44:39.024189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.230 [2024-11-07 13:44:39.024256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.230 [2024-11-07 13:44:39.024272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.230 [2024-11-07 13:44:39.024281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.230 [2024-11-07 13:44:39.024287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.230 [2024-11-07 13:44:39.024303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.230 qpair failed and we were unable to recover it.
00:39:31.230 [2024-11-07 13:44:39.034161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.230 [2024-11-07 13:44:39.034230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.230 [2024-11-07 13:44:39.034246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.230 [2024-11-07 13:44:39.034255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.230 [2024-11-07 13:44:39.034261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.230 [2024-11-07 13:44:39.034276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.230 qpair failed and we were unable to recover it.
00:39:31.230 [2024-11-07 13:44:39.044197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.230 [2024-11-07 13:44:39.044262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.230 [2024-11-07 13:44:39.044278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.230 [2024-11-07 13:44:39.044287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.230 [2024-11-07 13:44:39.044293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.230 [2024-11-07 13:44:39.044308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.230 qpair failed and we were unable to recover it.
00:39:31.230 [2024-11-07 13:44:39.054283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.230 [2024-11-07 13:44:39.054366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.230 [2024-11-07 13:44:39.054382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.230 [2024-11-07 13:44:39.054390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.230 [2024-11-07 13:44:39.054396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.230 [2024-11-07 13:44:39.054412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.230 qpair failed and we were unable to recover it.
00:39:31.230 [2024-11-07 13:44:39.064239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.230 [2024-11-07 13:44:39.064304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.230 [2024-11-07 13:44:39.064320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.230 [2024-11-07 13:44:39.064328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.230 [2024-11-07 13:44:39.064334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.230 [2024-11-07 13:44:39.064350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.230 qpair failed and we were unable to recover it.
00:39:31.230 [2024-11-07 13:44:39.074302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.230 [2024-11-07 13:44:39.074388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.230 [2024-11-07 13:44:39.074406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.230 [2024-11-07 13:44:39.074415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.230 [2024-11-07 13:44:39.074422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.230 [2024-11-07 13:44:39.074437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.230 qpair failed and we were unable to recover it.
00:39:31.230 [2024-11-07 13:44:39.084237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.230 [2024-11-07 13:44:39.084316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.230 [2024-11-07 13:44:39.084332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.230 [2024-11-07 13:44:39.084340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.230 [2024-11-07 13:44:39.084347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.230 [2024-11-07 13:44:39.084362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.230 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.094353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.094450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.094466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.094474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.094480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.231 [2024-11-07 13:44:39.094496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.231 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.104374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.104438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.104454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.104462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.104468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.231 [2024-11-07 13:44:39.104485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.231 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.114426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.114488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.114509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.114521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.114527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.231 [2024-11-07 13:44:39.114543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.231 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.124376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.124437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.124453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.124461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.124468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.231 [2024-11-07 13:44:39.124483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.231 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.134426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.134487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.134503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.134511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.134517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.231 [2024-11-07 13:44:39.134532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.231 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.144295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.144390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.144407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.144415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.144421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.231 [2024-11-07 13:44:39.144437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.231 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.154487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.154551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.154567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.154576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.154582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.231 [2024-11-07 13:44:39.154601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.231 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.164509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.164567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.164583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.164591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.164598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.231 [2024-11-07 13:44:39.164613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.231 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.174552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.174613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.174629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.174637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.174644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.231 [2024-11-07 13:44:39.174659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.231 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.184377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.184438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.184461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.184471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.184478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.231 [2024-11-07 13:44:39.184499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.231 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.194608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.194676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.194693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.194702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.194709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.231 [2024-11-07 13:44:39.194726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.231 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.204625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.204699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.204722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.204732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.204740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.231 [2024-11-07 13:44:39.204760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.231 qpair failed and we were unable to recover it.
00:39:31.231 [2024-11-07 13:44:39.214649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.231 [2024-11-07 13:44:39.214716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.231 [2024-11-07 13:44:39.214734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.231 [2024-11-07 13:44:39.214742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.231 [2024-11-07 13:44:39.214750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.232 [2024-11-07 13:44:39.214768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.232 qpair failed and we were unable to recover it.
00:39:31.232 [2024-11-07 13:44:39.224487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.232 [2024-11-07 13:44:39.224550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.232 [2024-11-07 13:44:39.224567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.232 [2024-11-07 13:44:39.224575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.232 [2024-11-07 13:44:39.224581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.232 [2024-11-07 13:44:39.224597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.232 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.234728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.495 [2024-11-07 13:44:39.234792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.495 [2024-11-07 13:44:39.234808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.495 [2024-11-07 13:44:39.234816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.495 [2024-11-07 13:44:39.234823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.495 [2024-11-07 13:44:39.234839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.495 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.244735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.495 [2024-11-07 13:44:39.244797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.495 [2024-11-07 13:44:39.244814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.495 [2024-11-07 13:44:39.244825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.495 [2024-11-07 13:44:39.244831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.495 [2024-11-07 13:44:39.244847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.495 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.254726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.495 [2024-11-07 13:44:39.254788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.495 [2024-11-07 13:44:39.254804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.495 [2024-11-07 13:44:39.254812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.495 [2024-11-07 13:44:39.254819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.495 [2024-11-07 13:44:39.254834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.495 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.264609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.495 [2024-11-07 13:44:39.264670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.495 [2024-11-07 13:44:39.264686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.495 [2024-11-07 13:44:39.264694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.495 [2024-11-07 13:44:39.264701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.495 [2024-11-07 13:44:39.264717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.495 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.274809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.495 [2024-11-07 13:44:39.274871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.495 [2024-11-07 13:44:39.274887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.495 [2024-11-07 13:44:39.274896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.495 [2024-11-07 13:44:39.274902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.495 [2024-11-07 13:44:39.274918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.495 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.284842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.495 [2024-11-07 13:44:39.284913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.495 [2024-11-07 13:44:39.284929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.495 [2024-11-07 13:44:39.284937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.495 [2024-11-07 13:44:39.284944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.495 [2024-11-07 13:44:39.284962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.495 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.294886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.495 [2024-11-07 13:44:39.294945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.495 [2024-11-07 13:44:39.294962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.495 [2024-11-07 13:44:39.294970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.495 [2024-11-07 13:44:39.294976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.495 [2024-11-07 13:44:39.294992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.495 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.304686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.495 [2024-11-07 13:44:39.304744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.495 [2024-11-07 13:44:39.304759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.495 [2024-11-07 13:44:39.304767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.495 [2024-11-07 13:44:39.304773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.495 [2024-11-07 13:44:39.304791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.495 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.314868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.495 [2024-11-07 13:44:39.314937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.495 [2024-11-07 13:44:39.314953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.495 [2024-11-07 13:44:39.314962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.495 [2024-11-07 13:44:39.314968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.495 [2024-11-07 13:44:39.314984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.495 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.324939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.495 [2024-11-07 13:44:39.325031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.495 [2024-11-07 13:44:39.325048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.495 [2024-11-07 13:44:39.325056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.495 [2024-11-07 13:44:39.325062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.495 [2024-11-07 13:44:39.325079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.495 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.334953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.495 [2024-11-07 13:44:39.335016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.495 [2024-11-07 13:44:39.335032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.495 [2024-11-07 13:44:39.335040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.495 [2024-11-07 13:44:39.335047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.495 [2024-11-07 13:44:39.335063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.495 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.344806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.495 [2024-11-07 13:44:39.344906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.495 [2024-11-07 13:44:39.344923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.495 [2024-11-07 13:44:39.344931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.495 [2024-11-07 13:44:39.344937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.495 [2024-11-07 13:44:39.344954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.495 qpair failed and we were unable to recover it.
00:39:31.495 [2024-11-07 13:44:39.355045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.355107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.355123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.355131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.355137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.355153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.496 qpair failed and we were unable to recover it.
00:39:31.496 [2024-11-07 13:44:39.365049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.365110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.365126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.365134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.365141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.365161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.496 qpair failed and we were unable to recover it.
00:39:31.496 [2024-11-07 13:44:39.374964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.375022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.375040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.375048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.375055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.375071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.496 qpair failed and we were unable to recover it.
00:39:31.496 [2024-11-07 13:44:39.384926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.385013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.385029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.385037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.385043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.385059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.496 qpair failed and we were unable to recover it.
00:39:31.496 [2024-11-07 13:44:39.395143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.395205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.395221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.395229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.395236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.395251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.496 qpair failed and we were unable to recover it.
00:39:31.496 [2024-11-07 13:44:39.405152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.405255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.405272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.405280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.405286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.405302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.496 qpair failed and we were unable to recover it.
00:39:31.496 [2024-11-07 13:44:39.415227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.415284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.415300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.415308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.415317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.415333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.496 qpair failed and we were unable to recover it.
00:39:31.496 [2024-11-07 13:44:39.424967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.425023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.425039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.425048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.425054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.425069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.496 qpair failed and we were unable to recover it.
00:39:31.496 [2024-11-07 13:44:39.435231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.435299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.435315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.435324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.435330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.435345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.496 qpair failed and we were unable to recover it.
00:39:31.496 [2024-11-07 13:44:39.445252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.445311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.445327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.445336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.445342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.445358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.496 qpair failed and we were unable to recover it.
00:39:31.496 [2024-11-07 13:44:39.455248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.455309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.455325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.455333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.455340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.455356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.496 qpair failed and we were unable to recover it.
00:39:31.496 [2024-11-07 13:44:39.465127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.465185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.465201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.465209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.465216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.465232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.496 qpair failed and we were unable to recover it.
00:39:31.496 [2024-11-07 13:44:39.475323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.496 [2024-11-07 13:44:39.475386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.496 [2024-11-07 13:44:39.475402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.496 [2024-11-07 13:44:39.475410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.496 [2024-11-07 13:44:39.475416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.496 [2024-11-07 13:44:39.475432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.497 qpair failed and we were unable to recover it.
00:39:31.497 [2024-11-07 13:44:39.485342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.497 [2024-11-07 13:44:39.485437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.497 [2024-11-07 13:44:39.485453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.497 [2024-11-07 13:44:39.485462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.497 [2024-11-07 13:44:39.485468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.497 [2024-11-07 13:44:39.485484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.497 qpair failed and we were unable to recover it.
00:39:31.497 [2024-11-07 13:44:39.495374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.497 [2024-11-07 13:44:39.495433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.497 [2024-11-07 13:44:39.495449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.497 [2024-11-07 13:44:39.495457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.497 [2024-11-07 13:44:39.495464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.497 [2024-11-07 13:44:39.495479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.497 qpair failed and we were unable to recover it.
00:39:31.766 [2024-11-07 13:44:39.505214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.766 [2024-11-07 13:44:39.505270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.766 [2024-11-07 13:44:39.505289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.766 [2024-11-07 13:44:39.505297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.766 [2024-11-07 13:44:39.505303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.766 [2024-11-07 13:44:39.505319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.766 qpair failed and we were unable to recover it.
00:39:31.766 [2024-11-07 13:44:39.515347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.766 [2024-11-07 13:44:39.515411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.766 [2024-11-07 13:44:39.515426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.766 [2024-11-07 13:44:39.515435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.766 [2024-11-07 13:44:39.515441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.766 [2024-11-07 13:44:39.515456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.766 qpair failed and we were unable to recover it.
00:39:31.766 [2024-11-07 13:44:39.525485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.767 [2024-11-07 13:44:39.525560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.767 [2024-11-07 13:44:39.525576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.767 [2024-11-07 13:44:39.525584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.767 [2024-11-07 13:44:39.525590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.767 [2024-11-07 13:44:39.525607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.767 qpair failed and we were unable to recover it.
00:39:31.767 [2024-11-07 13:44:39.535467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.767 [2024-11-07 13:44:39.535525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.767 [2024-11-07 13:44:39.535541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.767 [2024-11-07 13:44:39.535550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.767 [2024-11-07 13:44:39.535556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.767 [2024-11-07 13:44:39.535572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.767 qpair failed and we were unable to recover it.
00:39:31.767 [2024-11-07 13:44:39.545296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.767 [2024-11-07 13:44:39.545390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.767 [2024-11-07 13:44:39.545407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.767 [2024-11-07 13:44:39.545415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.767 [2024-11-07 13:44:39.545424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.767 [2024-11-07 13:44:39.545440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.767 qpair failed and we were unable to recover it.
00:39:31.767 [2024-11-07 13:44:39.555598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.767 [2024-11-07 13:44:39.555695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.767 [2024-11-07 13:44:39.555712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.767 [2024-11-07 13:44:39.555720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.767 [2024-11-07 13:44:39.555727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.767 [2024-11-07 13:44:39.555742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.767 qpair failed and we were unable to recover it.
00:39:31.768 [2024-11-07 13:44:39.565488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.768 [2024-11-07 13:44:39.565589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.768 [2024-11-07 13:44:39.565605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.768 [2024-11-07 13:44:39.565614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.768 [2024-11-07 13:44:39.565620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.768 [2024-11-07 13:44:39.565636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.768 qpair failed and we were unable to recover it.
00:39:31.768 [2024-11-07 13:44:39.575595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.768 [2024-11-07 13:44:39.575658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.768 [2024-11-07 13:44:39.575675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.768 [2024-11-07 13:44:39.575683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.768 [2024-11-07 13:44:39.575690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.768 [2024-11-07 13:44:39.575705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.768 qpair failed and we were unable to recover it.
00:39:31.768 [2024-11-07 13:44:39.585437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.768 [2024-11-07 13:44:39.585494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.768 [2024-11-07 13:44:39.585509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.768 [2024-11-07 13:44:39.585517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.768 [2024-11-07 13:44:39.585523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.768 [2024-11-07 13:44:39.585539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.768 qpair failed and we were unable to recover it.
00:39:31.768 [2024-11-07 13:44:39.595677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.768 [2024-11-07 13:44:39.595742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.768 [2024-11-07 13:44:39.595758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.769 [2024-11-07 13:44:39.595767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.769 [2024-11-07 13:44:39.595773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.769 [2024-11-07 13:44:39.595789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.769 qpair failed and we were unable to recover it.
00:39:31.769 [2024-11-07 13:44:39.605729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.769 [2024-11-07 13:44:39.605829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.769 [2024-11-07 13:44:39.605845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.769 [2024-11-07 13:44:39.605854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.769 [2024-11-07 13:44:39.605860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.769 [2024-11-07 13:44:39.605880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.769 qpair failed and we were unable to recover it.
00:39:31.769 [2024-11-07 13:44:39.615703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.769 [2024-11-07 13:44:39.615762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.769 [2024-11-07 13:44:39.615778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.769 [2024-11-07 13:44:39.615787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.769 [2024-11-07 13:44:39.615793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.769 [2024-11-07 13:44:39.615808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.769 qpair failed and we were unable to recover it.
00:39:31.769 [2024-11-07 13:44:39.625429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.769 [2024-11-07 13:44:39.625485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.769 [2024-11-07 13:44:39.625500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.769 [2024-11-07 13:44:39.625513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.769 [2024-11-07 13:44:39.625520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.769 [2024-11-07 13:44:39.625535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.769 qpair failed and we were unable to recover it.
00:39:31.769 [2024-11-07 13:44:39.635773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.770 [2024-11-07 13:44:39.635841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.770 [2024-11-07 13:44:39.635860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.770 [2024-11-07 13:44:39.635872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.770 [2024-11-07 13:44:39.635879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.770 [2024-11-07 13:44:39.635898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.770 qpair failed and we were unable to recover it.
00:39:31.770 [2024-11-07 13:44:39.645766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.770 [2024-11-07 13:44:39.645828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.770 [2024-11-07 13:44:39.645844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.770 [2024-11-07 13:44:39.645853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.770 [2024-11-07 13:44:39.645859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.770 [2024-11-07 13:44:39.645878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.770 qpair failed and we were unable to recover it.
00:39:31.770 [2024-11-07 13:44:39.655821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.770 [2024-11-07 13:44:39.655887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.770 [2024-11-07 13:44:39.655903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.770 [2024-11-07 13:44:39.655912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.770 [2024-11-07 13:44:39.655918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.770 [2024-11-07 13:44:39.655934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.770 qpair failed and we were unable to recover it.
00:39:31.770 [2024-11-07 13:44:39.665614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.770 [2024-11-07 13:44:39.665673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.770 [2024-11-07 13:44:39.665689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.770 [2024-11-07 13:44:39.665697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.771 [2024-11-07 13:44:39.665704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.771 [2024-11-07 13:44:39.665719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.771 qpair failed and we were unable to recover it.
00:39:31.771 [2024-11-07 13:44:39.675855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.771 [2024-11-07 13:44:39.675946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.771 [2024-11-07 13:44:39.675962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.771 [2024-11-07 13:44:39.675974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.771 [2024-11-07 13:44:39.675980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.771 [2024-11-07 13:44:39.675996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.771 qpair failed and we were unable to recover it.
00:39:31.771 [2024-11-07 13:44:39.685818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.771 [2024-11-07 13:44:39.685884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.771 [2024-11-07 13:44:39.685901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.771 [2024-11-07 13:44:39.685909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.771 [2024-11-07 13:44:39.685915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.771 [2024-11-07 13:44:39.685931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.771 qpair failed and we were unable to recover it.
00:39:31.771 [2024-11-07 13:44:39.695912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.771 [2024-11-07 13:44:39.695986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.771 [2024-11-07 13:44:39.696002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.772 [2024-11-07 13:44:39.696010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.772 [2024-11-07 13:44:39.696017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.772 [2024-11-07 13:44:39.696033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.772 qpair failed and we were unable to recover it.
00:39:31.772 [2024-11-07 13:44:39.705747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.772 [2024-11-07 13:44:39.705829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.772 [2024-11-07 13:44:39.705846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.772 [2024-11-07 13:44:39.705854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.772 [2024-11-07 13:44:39.705860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.772 [2024-11-07 13:44:39.705880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.772 qpair failed and we were unable to recover it.
00:39:31.772 [2024-11-07 13:44:39.715975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.772 [2024-11-07 13:44:39.716039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.772 [2024-11-07 13:44:39.716055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.772 [2024-11-07 13:44:39.716063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.772 [2024-11-07 13:44:39.716069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.772 [2024-11-07 13:44:39.716088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.772 qpair failed and we were unable to recover it.
00:39:31.772 [2024-11-07 13:44:39.725977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.773 [2024-11-07 13:44:39.726043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.773 [2024-11-07 13:44:39.726059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.773 [2024-11-07 13:44:39.726067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.773 [2024-11-07 13:44:39.726076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.773 [2024-11-07 13:44:39.726093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.773 qpair failed and we were unable to recover it.
00:39:31.773 [2024-11-07 13:44:39.736024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.773 [2024-11-07 13:44:39.736087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.773 [2024-11-07 13:44:39.736103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.773 [2024-11-07 13:44:39.736111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.773 [2024-11-07 13:44:39.736117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.773 [2024-11-07 13:44:39.736133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.773 qpair failed and we were unable to recover it.
00:39:31.773 [2024-11-07 13:44:39.745869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.773 [2024-11-07 13:44:39.745934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.773 [2024-11-07 13:44:39.745950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.773 [2024-11-07 13:44:39.745958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.773 [2024-11-07 13:44:39.745965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.774 [2024-11-07 13:44:39.745980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.774 qpair failed and we were unable to recover it.
00:39:31.774 [2024-11-07 13:44:39.756072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.774 [2024-11-07 13:44:39.756130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.774 [2024-11-07 13:44:39.756147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.774 [2024-11-07 13:44:39.756155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.774 [2024-11-07 13:44:39.756161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:31.774 [2024-11-07 13:44:39.756177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:31.774 qpair failed and we were unable to recover it.
00:39:32.038 [2024-11-07 13:44:39.766103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.038 [2024-11-07 13:44:39.766166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.038 [2024-11-07 13:44:39.766182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.038 [2024-11-07 13:44:39.766191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.038 [2024-11-07 13:44:39.766197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:32.038 [2024-11-07 13:44:39.766214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:32.038 qpair failed and we were unable to recover it.
00:39:32.038 [2024-11-07 13:44:39.775950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.038 [2024-11-07 13:44:39.776005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.038 [2024-11-07 13:44:39.776022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.038 [2024-11-07 13:44:39.776030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.038 [2024-11-07 13:44:39.776036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:32.038 [2024-11-07 13:44:39.776052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:32.038 qpair failed and we were unable to recover it.
00:39:32.038 [2024-11-07 13:44:39.785968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.038 [2024-11-07 13:44:39.786025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.038 [2024-11-07 13:44:39.786041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.038 [2024-11-07 13:44:39.786049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.038 [2024-11-07 13:44:39.786055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:32.038 [2024-11-07 13:44:39.786071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:32.038 qpair failed and we were unable to recover it.
00:39:32.038 [2024-11-07 13:44:39.796171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.038 [2024-11-07 13:44:39.796238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.038 [2024-11-07 13:44:39.796253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.038 [2024-11-07 13:44:39.796261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.038 [2024-11-07 13:44:39.796268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:32.038 [2024-11-07 13:44:39.796283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:32.038 qpair failed and we were unable to recover it.
00:39:32.038 [2024-11-07 13:44:39.806189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.038 [2024-11-07 13:44:39.806246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.038 [2024-11-07 13:44:39.806263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.038 [2024-11-07 13:44:39.806273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.038 [2024-11-07 13:44:39.806279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:32.038 [2024-11-07 13:44:39.806295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:32.038 qpair failed and we were unable to recover it.
00:39:32.038 [2024-11-07 13:44:39.815990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.038 [2024-11-07 13:44:39.816060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.038 [2024-11-07 13:44:39.816076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.038 [2024-11-07 13:44:39.816085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.038 [2024-11-07 13:44:39.816091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.038 [2024-11-07 13:44:39.816108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.038 qpair failed and we were unable to recover it. 00:39:32.038 [2024-11-07 13:44:39.825986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.038 [2024-11-07 13:44:39.826043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.038 [2024-11-07 13:44:39.826058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.038 [2024-11-07 13:44:39.826066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.038 [2024-11-07 13:44:39.826073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.038 [2024-11-07 13:44:39.826089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.038 qpair failed and we were unable to recover it. 00:39:32.038 [2024-11-07 13:44:39.836320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.038 [2024-11-07 13:44:39.836395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.038 [2024-11-07 13:44:39.836411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.038 [2024-11-07 13:44:39.836419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.836425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.836441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 
00:39:32.039 [2024-11-07 13:44:39.846243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.039 [2024-11-07 13:44:39.846301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.039 [2024-11-07 13:44:39.846318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.039 [2024-11-07 13:44:39.846326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.846333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.846356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 00:39:32.039 [2024-11-07 13:44:39.856117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.039 [2024-11-07 13:44:39.856173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.039 [2024-11-07 13:44:39.856189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.039 [2024-11-07 13:44:39.856197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.856204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.856220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 00:39:32.039 [2024-11-07 13:44:39.866161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.039 [2024-11-07 13:44:39.866267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.039 [2024-11-07 13:44:39.866284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.039 [2024-11-07 13:44:39.866292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.866299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.866315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 
00:39:32.039 [2024-11-07 13:44:39.876408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.039 [2024-11-07 13:44:39.876472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.039 [2024-11-07 13:44:39.876488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.039 [2024-11-07 13:44:39.876496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.876503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.876518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 00:39:32.039 [2024-11-07 13:44:39.886448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.039 [2024-11-07 13:44:39.886509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.039 [2024-11-07 13:44:39.886525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.039 [2024-11-07 13:44:39.886533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.886540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.886556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 00:39:32.039 [2024-11-07 13:44:39.896270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.039 [2024-11-07 13:44:39.896327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.039 [2024-11-07 13:44:39.896343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.039 [2024-11-07 13:44:39.896351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.896358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.896373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 
00:39:32.039 [2024-11-07 13:44:39.906295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.039 [2024-11-07 13:44:39.906356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.039 [2024-11-07 13:44:39.906373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.039 [2024-11-07 13:44:39.906381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.906387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.906402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 00:39:32.039 [2024-11-07 13:44:39.916405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.039 [2024-11-07 13:44:39.916469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.039 [2024-11-07 13:44:39.916484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.039 [2024-11-07 13:44:39.916492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.916499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.916515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 00:39:32.039 [2024-11-07 13:44:39.926546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.039 [2024-11-07 13:44:39.926608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.039 [2024-11-07 13:44:39.926624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.039 [2024-11-07 13:44:39.926633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.926639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.926656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 
00:39:32.039 [2024-11-07 13:44:39.936359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.039 [2024-11-07 13:44:39.936415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.039 [2024-11-07 13:44:39.936433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.039 [2024-11-07 13:44:39.936441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.936447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.936463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 00:39:32.039 [2024-11-07 13:44:39.946411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.039 [2024-11-07 13:44:39.946468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.039 [2024-11-07 13:44:39.946483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.039 [2024-11-07 13:44:39.946491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.946498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.946514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 00:39:32.039 [2024-11-07 13:44:39.956664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.039 [2024-11-07 13:44:39.956732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.039 [2024-11-07 13:44:39.956748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.039 [2024-11-07 13:44:39.956756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.039 [2024-11-07 13:44:39.956762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.039 [2024-11-07 13:44:39.956778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.039 qpair failed and we were unable to recover it. 
00:39:32.039 [2024-11-07 13:44:39.966628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.040 [2024-11-07 13:44:39.966694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.040 [2024-11-07 13:44:39.966711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.040 [2024-11-07 13:44:39.966719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.040 [2024-11-07 13:44:39.966728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.040 [2024-11-07 13:44:39.966757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.040 qpair failed and we were unable to recover it. 00:39:32.040 [2024-11-07 13:44:39.976498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.040 [2024-11-07 13:44:39.976550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.040 [2024-11-07 13:44:39.976566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.040 [2024-11-07 13:44:39.976574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.040 [2024-11-07 13:44:39.976583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.040 [2024-11-07 13:44:39.976599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.040 qpair failed and we were unable to recover it. 00:39:32.040 [2024-11-07 13:44:39.986502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.040 [2024-11-07 13:44:39.986560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.040 [2024-11-07 13:44:39.986575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.040 [2024-11-07 13:44:39.986584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.040 [2024-11-07 13:44:39.986590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.040 [2024-11-07 13:44:39.986605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.040 qpair failed and we were unable to recover it. 
00:39:32.040 [2024-11-07 13:44:39.996727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.040 [2024-11-07 13:44:39.996784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.040 [2024-11-07 13:44:39.996801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.040 [2024-11-07 13:44:39.996809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.040 [2024-11-07 13:44:39.996815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.040 [2024-11-07 13:44:39.996830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.040 qpair failed and we were unable to recover it. 00:39:32.040 [2024-11-07 13:44:40.006809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.040 [2024-11-07 13:44:40.006910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.040 [2024-11-07 13:44:40.006932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.040 [2024-11-07 13:44:40.006942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.040 [2024-11-07 13:44:40.006949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.040 [2024-11-07 13:44:40.006969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.040 qpair failed and we were unable to recover it. 00:39:32.040 [2024-11-07 13:44:40.016651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.040 [2024-11-07 13:44:40.016716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.040 [2024-11-07 13:44:40.016735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.040 [2024-11-07 13:44:40.016744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.040 [2024-11-07 13:44:40.016751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.040 [2024-11-07 13:44:40.016770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.040 qpair failed and we were unable to recover it. 
00:39:32.040 [2024-11-07 13:44:40.026550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.040 [2024-11-07 13:44:40.026609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.040 [2024-11-07 13:44:40.026626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.040 [2024-11-07 13:44:40.026634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.040 [2024-11-07 13:44:40.026641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.040 [2024-11-07 13:44:40.026658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.040 qpair failed and we were unable to recover it. 00:39:32.040 [2024-11-07 13:44:40.036806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.040 [2024-11-07 13:44:40.036898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.040 [2024-11-07 13:44:40.036920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.040 [2024-11-07 13:44:40.036935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.040 [2024-11-07 13:44:40.036946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.040 [2024-11-07 13:44:40.036971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.040 qpair failed and we were unable to recover it. 00:39:32.302 [2024-11-07 13:44:40.046615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.302 [2024-11-07 13:44:40.046678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.302 [2024-11-07 13:44:40.046694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.302 [2024-11-07 13:44:40.046702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.302 [2024-11-07 13:44:40.046709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.302 [2024-11-07 13:44:40.046725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.302 qpair failed and we were unable to recover it. 
00:39:32.302 [2024-11-07 13:44:40.056959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.302 [2024-11-07 13:44:40.057024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.302 [2024-11-07 13:44:40.057040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.302 [2024-11-07 13:44:40.057048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.302 [2024-11-07 13:44:40.057055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.302 [2024-11-07 13:44:40.057072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.302 qpair failed and we were unable to recover it. 00:39:32.302 [2024-11-07 13:44:40.066635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.302 [2024-11-07 13:44:40.066693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.302 [2024-11-07 13:44:40.066712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.302 [2024-11-07 13:44:40.066720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.302 [2024-11-07 13:44:40.066727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.302 [2024-11-07 13:44:40.066743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.302 qpair failed and we were unable to recover it. 00:39:32.302 [2024-11-07 13:44:40.076806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.302 [2024-11-07 13:44:40.076895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.302 [2024-11-07 13:44:40.076913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.302 [2024-11-07 13:44:40.076921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.302 [2024-11-07 13:44:40.076928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.302 [2024-11-07 13:44:40.076944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.302 qpair failed and we were unable to recover it. 
00:39:32.302 [2024-11-07 13:44:40.086801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.302 [2024-11-07 13:44:40.086874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.302 [2024-11-07 13:44:40.086890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.302 [2024-11-07 13:44:40.086898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.302 [2024-11-07 13:44:40.086905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.302 [2024-11-07 13:44:40.086922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.302 qpair failed and we were unable to recover it. 00:39:32.302 [2024-11-07 13:44:40.096821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.302 [2024-11-07 13:44:40.096882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.302 [2024-11-07 13:44:40.096898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.302 [2024-11-07 13:44:40.096906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.302 [2024-11-07 13:44:40.096914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.302 [2024-11-07 13:44:40.096930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.302 qpair failed and we were unable to recover it. 00:39:32.302 [2024-11-07 13:44:40.106858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.302 [2024-11-07 13:44:40.106944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.302 [2024-11-07 13:44:40.106961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.302 [2024-11-07 13:44:40.106970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.302 [2024-11-07 13:44:40.106981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.302 [2024-11-07 13:44:40.106997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.302 qpair failed and we were unable to recover it. 
00:39:32.302 [2024-11-07 13:44:40.116788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.302 [2024-11-07 13:44:40.116842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.302 [2024-11-07 13:44:40.116859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.302 [2024-11-07 13:44:40.116871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.116879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.116894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 00:39:32.303 [2024-11-07 13:44:40.127020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.127084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.303 [2024-11-07 13:44:40.127101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.303 [2024-11-07 13:44:40.127109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.127115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.127131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 00:39:32.303 [2024-11-07 13:44:40.136981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.137039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.303 [2024-11-07 13:44:40.137055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.303 [2024-11-07 13:44:40.137063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.137078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.137094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 
00:39:32.303 [2024-11-07 13:44:40.146948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.147005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.303 [2024-11-07 13:44:40.147020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.303 [2024-11-07 13:44:40.147029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.147035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.147051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 00:39:32.303 [2024-11-07 13:44:40.157037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.157095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.303 [2024-11-07 13:44:40.157111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.303 [2024-11-07 13:44:40.157120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.157126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.157143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 00:39:32.303 [2024-11-07 13:44:40.167009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.167096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.303 [2024-11-07 13:44:40.167113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.303 [2024-11-07 13:44:40.167121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.167128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.167145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 
00:39:32.303 [2024-11-07 13:44:40.177028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.177088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.303 [2024-11-07 13:44:40.177105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.303 [2024-11-07 13:44:40.177113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.177119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.177136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 00:39:32.303 [2024-11-07 13:44:40.187065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.187123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.303 [2024-11-07 13:44:40.187140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.303 [2024-11-07 13:44:40.187148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.187154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.187170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 00:39:32.303 [2024-11-07 13:44:40.197189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.197249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.303 [2024-11-07 13:44:40.197265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.303 [2024-11-07 13:44:40.197273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.197279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.197295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 
00:39:32.303 [2024-11-07 13:44:40.207113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.207172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.303 [2024-11-07 13:44:40.207189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.303 [2024-11-07 13:44:40.207197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.207203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.207218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 00:39:32.303 [2024-11-07 13:44:40.217091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.217190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.303 [2024-11-07 13:44:40.217207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.303 [2024-11-07 13:44:40.217215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.217222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.217237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 00:39:32.303 [2024-11-07 13:44:40.227070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.227126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.303 [2024-11-07 13:44:40.227142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.303 [2024-11-07 13:44:40.227150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.227156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.227172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 
00:39:32.303 [2024-11-07 13:44:40.237124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.237180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.303 [2024-11-07 13:44:40.237197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.303 [2024-11-07 13:44:40.237208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.303 [2024-11-07 13:44:40.237214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.303 [2024-11-07 13:44:40.237230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.303 qpair failed and we were unable to recover it. 00:39:32.303 [2024-11-07 13:44:40.247216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.303 [2024-11-07 13:44:40.247275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.304 [2024-11-07 13:44:40.247291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.304 [2024-11-07 13:44:40.247299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.304 [2024-11-07 13:44:40.247306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.304 [2024-11-07 13:44:40.247321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.304 qpair failed and we were unable to recover it. 00:39:32.304 [2024-11-07 13:44:40.257246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.304 [2024-11-07 13:44:40.257306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.304 [2024-11-07 13:44:40.257322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.304 [2024-11-07 13:44:40.257330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.304 [2024-11-07 13:44:40.257337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.304 [2024-11-07 13:44:40.257353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.304 qpair failed and we were unable to recover it. 
00:39:32.304 [2024-11-07 13:44:40.267292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.304 [2024-11-07 13:44:40.267369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.304 [2024-11-07 13:44:40.267385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.304 [2024-11-07 13:44:40.267393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.304 [2024-11-07 13:44:40.267402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.304 [2024-11-07 13:44:40.267418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.304 qpair failed and we were unable to recover it. 00:39:32.304 [2024-11-07 13:44:40.277261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.304 [2024-11-07 13:44:40.277362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.304 [2024-11-07 13:44:40.277379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.304 [2024-11-07 13:44:40.277387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.304 [2024-11-07 13:44:40.277394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.304 [2024-11-07 13:44:40.277413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.304 qpair failed and we were unable to recover it. 00:39:32.304 [2024-11-07 13:44:40.287323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.304 [2024-11-07 13:44:40.287378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.304 [2024-11-07 13:44:40.287394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.304 [2024-11-07 13:44:40.287402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.304 [2024-11-07 13:44:40.287409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.304 [2024-11-07 13:44:40.287424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.304 qpair failed and we were unable to recover it. 
00:39:32.304 [2024-11-07 13:44:40.297239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.304 [2024-11-07 13:44:40.297292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.304 [2024-11-07 13:44:40.297308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.304 [2024-11-07 13:44:40.297316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.304 [2024-11-07 13:44:40.297322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.304 [2024-11-07 13:44:40.297341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.304 qpair failed and we were unable to recover it. 00:39:32.567 [2024-11-07 13:44:40.307377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.567 [2024-11-07 13:44:40.307436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.567 [2024-11-07 13:44:40.307452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.567 [2024-11-07 13:44:40.307460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.567 [2024-11-07 13:44:40.307467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.567 [2024-11-07 13:44:40.307482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.567 qpair failed and we were unable to recover it. 00:39:32.567 [2024-11-07 13:44:40.317440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.567 [2024-11-07 13:44:40.317519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.567 [2024-11-07 13:44:40.317536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.567 [2024-11-07 13:44:40.317544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.567 [2024-11-07 13:44:40.317551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:32.567 [2024-11-07 13:44:40.317567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:32.567 qpair failed and we were unable to recover it. 
00:39:32.567 [2024-11-07 13:44:40.327439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.567 [2024-11-07 13:44:40.327499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.567 [2024-11-07 13:44:40.327515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.567 [2024-11-07 13:44:40.327523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.567 [2024-11-07 13:44:40.327530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:32.567 [2024-11-07 13:44:40.327545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:32.567 qpair failed and we were unable to recover it.
[... the same seven-line CONNECT failure block repeats 68 more times at roughly 10 ms intervals, 13:44:40.337 through 13:44:41.009 (elapsed prefix advancing 00:39:32.567 -> 00:39:33.098), identical apart from timestamps; every attempt targets qpair id 2 and ends "qpair failed and we were unable to recover it." ...]
00:39:33.098 [2024-11-07 13:44:41.019197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.098 [2024-11-07 13:44:41.019252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.098 [2024-11-07 13:44:41.019268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.098 [2024-11-07 13:44:41.019276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.098 [2024-11-07 13:44:41.019282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.098 [2024-11-07 13:44:41.019298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.098 qpair failed and we were unable to recover it. 00:39:33.098 [2024-11-07 13:44:41.029272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.098 [2024-11-07 13:44:41.029327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.098 [2024-11-07 13:44:41.029343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.098 [2024-11-07 13:44:41.029351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.098 [2024-11-07 13:44:41.029357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.098 [2024-11-07 13:44:41.029376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.098 qpair failed and we were unable to recover it. 00:39:33.098 [2024-11-07 13:44:41.039385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.098 [2024-11-07 13:44:41.039441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.098 [2024-11-07 13:44:41.039458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.098 [2024-11-07 13:44:41.039466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.098 [2024-11-07 13:44:41.039473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.098 [2024-11-07 13:44:41.039489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.098 qpair failed and we were unable to recover it. 
00:39:33.098 [2024-11-07 13:44:41.049383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.098 [2024-11-07 13:44:41.049438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.098 [2024-11-07 13:44:41.049454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.098 [2024-11-07 13:44:41.049462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.098 [2024-11-07 13:44:41.049469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.098 [2024-11-07 13:44:41.049485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.098 qpair failed and we were unable to recover it. 00:39:33.098 [2024-11-07 13:44:41.059408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.098 [2024-11-07 13:44:41.059466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.098 [2024-11-07 13:44:41.059484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.098 [2024-11-07 13:44:41.059493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.098 [2024-11-07 13:44:41.059499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.098 [2024-11-07 13:44:41.059515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.098 qpair failed and we were unable to recover it. 00:39:33.098 [2024-11-07 13:44:41.069444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.098 [2024-11-07 13:44:41.069502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.098 [2024-11-07 13:44:41.069518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.098 [2024-11-07 13:44:41.069526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.098 [2024-11-07 13:44:41.069532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.098 [2024-11-07 13:44:41.069549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.098 qpair failed and we were unable to recover it. 
00:39:33.098 [2024-11-07 13:44:41.079377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.098 [2024-11-07 13:44:41.079432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.098 [2024-11-07 13:44:41.079448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.098 [2024-11-07 13:44:41.079456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.098 [2024-11-07 13:44:41.079462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.098 [2024-11-07 13:44:41.079478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.098 qpair failed and we were unable to recover it. 00:39:33.098 [2024-11-07 13:44:41.089504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.098 [2024-11-07 13:44:41.089558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.098 [2024-11-07 13:44:41.089574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.098 [2024-11-07 13:44:41.089582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.098 [2024-11-07 13:44:41.089588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.098 [2024-11-07 13:44:41.089604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.098 qpair failed and we were unable to recover it. 00:39:33.361 [2024-11-07 13:44:41.099544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.361 [2024-11-07 13:44:41.099603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.361 [2024-11-07 13:44:41.099619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.361 [2024-11-07 13:44:41.099627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.361 [2024-11-07 13:44:41.099638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.361 [2024-11-07 13:44:41.099654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.361 qpair failed and we were unable to recover it. 
00:39:33.361 [2024-11-07 13:44:41.109590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.361 [2024-11-07 13:44:41.109655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.361 [2024-11-07 13:44:41.109671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.361 [2024-11-07 13:44:41.109679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.361 [2024-11-07 13:44:41.109686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.361 [2024-11-07 13:44:41.109701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.361 qpair failed and we were unable to recover it. 00:39:33.361 [2024-11-07 13:44:41.119572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.361 [2024-11-07 13:44:41.119628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.361 [2024-11-07 13:44:41.119644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.361 [2024-11-07 13:44:41.119653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.361 [2024-11-07 13:44:41.119659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.361 [2024-11-07 13:44:41.119675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.361 qpair failed and we were unable to recover it. 00:39:33.361 [2024-11-07 13:44:41.129596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.361 [2024-11-07 13:44:41.129651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.361 [2024-11-07 13:44:41.129668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.361 [2024-11-07 13:44:41.129676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.361 [2024-11-07 13:44:41.129682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.361 [2024-11-07 13:44:41.129698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.361 qpair failed and we were unable to recover it. 
00:39:33.361 [2024-11-07 13:44:41.139622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.361 [2024-11-07 13:44:41.139675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.361 [2024-11-07 13:44:41.139692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.361 [2024-11-07 13:44:41.139700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.361 [2024-11-07 13:44:41.139706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.361 [2024-11-07 13:44:41.139722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.361 qpair failed and we were unable to recover it. 00:39:33.361 [2024-11-07 13:44:41.149648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.361 [2024-11-07 13:44:41.149711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.361 [2024-11-07 13:44:41.149727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.361 [2024-11-07 13:44:41.149735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.361 [2024-11-07 13:44:41.149741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.361 [2024-11-07 13:44:41.149757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.361 qpair failed and we were unable to recover it. 00:39:33.361 [2024-11-07 13:44:41.159679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.361 [2024-11-07 13:44:41.159734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.361 [2024-11-07 13:44:41.159750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.361 [2024-11-07 13:44:41.159758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.361 [2024-11-07 13:44:41.159764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.361 [2024-11-07 13:44:41.159780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.361 qpair failed and we were unable to recover it. 
00:39:33.361 [2024-11-07 13:44:41.169694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.361 [2024-11-07 13:44:41.169748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.361 [2024-11-07 13:44:41.169764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.361 [2024-11-07 13:44:41.169772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.361 [2024-11-07 13:44:41.169778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.361 [2024-11-07 13:44:41.169793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.361 qpair failed and we were unable to recover it. 00:39:33.361 [2024-11-07 13:44:41.179689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.361 [2024-11-07 13:44:41.179747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.361 [2024-11-07 13:44:41.179763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.361 [2024-11-07 13:44:41.179771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.361 [2024-11-07 13:44:41.179777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.361 [2024-11-07 13:44:41.179793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.361 qpair failed and we were unable to recover it. 00:39:33.361 [2024-11-07 13:44:41.189727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.361 [2024-11-07 13:44:41.189785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.361 [2024-11-07 13:44:41.189803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.361 [2024-11-07 13:44:41.189810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.361 [2024-11-07 13:44:41.189817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.361 [2024-11-07 13:44:41.189832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.361 qpair failed and we were unable to recover it. 
00:39:33.361 [2024-11-07 13:44:41.199774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.361 [2024-11-07 13:44:41.199832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.361 [2024-11-07 13:44:41.199848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.199856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.199865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.362 [2024-11-07 13:44:41.199882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.362 qpair failed and we were unable to recover it. 00:39:33.362 [2024-11-07 13:44:41.209811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.362 [2024-11-07 13:44:41.209874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.362 [2024-11-07 13:44:41.209890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.209898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.209904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.362 [2024-11-07 13:44:41.209920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.362 qpair failed and we were unable to recover it. 00:39:33.362 [2024-11-07 13:44:41.219823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.362 [2024-11-07 13:44:41.219922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.362 [2024-11-07 13:44:41.219939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.219947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.219954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.362 [2024-11-07 13:44:41.219970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.362 qpair failed and we were unable to recover it. 
00:39:33.362 [2024-11-07 13:44:41.229851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.362 [2024-11-07 13:44:41.229956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.362 [2024-11-07 13:44:41.229973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.229981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.229990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.362 [2024-11-07 13:44:41.230006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.362 qpair failed and we were unable to recover it. 00:39:33.362 [2024-11-07 13:44:41.239859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.362 [2024-11-07 13:44:41.239920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.362 [2024-11-07 13:44:41.239936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.239944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.239950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.362 [2024-11-07 13:44:41.239966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.362 qpair failed and we were unable to recover it. 00:39:33.362 [2024-11-07 13:44:41.249892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.362 [2024-11-07 13:44:41.249976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.362 [2024-11-07 13:44:41.249992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.250000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.250006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.362 [2024-11-07 13:44:41.250022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.362 qpair failed and we were unable to recover it. 
00:39:33.362 [2024-11-07 13:44:41.259928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.362 [2024-11-07 13:44:41.259983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.362 [2024-11-07 13:44:41.259998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.260007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.260013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.362 [2024-11-07 13:44:41.260029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.362 qpair failed and we were unable to recover it. 00:39:33.362 [2024-11-07 13:44:41.269968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.362 [2024-11-07 13:44:41.270030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.362 [2024-11-07 13:44:41.270046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.270054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.270060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.362 [2024-11-07 13:44:41.270076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.362 qpair failed and we were unable to recover it. 00:39:33.362 [2024-11-07 13:44:41.279999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.362 [2024-11-07 13:44:41.280060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.362 [2024-11-07 13:44:41.280076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.280084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.280090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.362 [2024-11-07 13:44:41.280106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.362 qpair failed and we were unable to recover it. 
00:39:33.362 [2024-11-07 13:44:41.289903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.362 [2024-11-07 13:44:41.289964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.362 [2024-11-07 13:44:41.289980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.289988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.289995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.362 [2024-11-07 13:44:41.290023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.362 qpair failed and we were unable to recover it. 00:39:33.362 [2024-11-07 13:44:41.300031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.362 [2024-11-07 13:44:41.300092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.362 [2024-11-07 13:44:41.300108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.300116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.300122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.362 [2024-11-07 13:44:41.300138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.362 qpair failed and we were unable to recover it. 00:39:33.362 [2024-11-07 13:44:41.310081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.362 [2024-11-07 13:44:41.310141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.362 [2024-11-07 13:44:41.310157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.310165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.310171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.362 [2024-11-07 13:44:41.310187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.362 qpair failed and we were unable to recover it. 
00:39:33.362 [2024-11-07 13:44:41.320081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.362 [2024-11-07 13:44:41.320143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.362 [2024-11-07 13:44:41.320159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.362 [2024-11-07 13:44:41.320167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.362 [2024-11-07 13:44:41.320174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.363 [2024-11-07 13:44:41.320190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.363 qpair failed and we were unable to recover it. 00:39:33.363 [2024-11-07 13:44:41.330017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.363 [2024-11-07 13:44:41.330103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.363 [2024-11-07 13:44:41.330119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.363 [2024-11-07 13:44:41.330128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.363 [2024-11-07 13:44:41.330134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.363 [2024-11-07 13:44:41.330150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.363 qpair failed and we were unable to recover it. 00:39:33.363 [2024-11-07 13:44:41.340184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.363 [2024-11-07 13:44:41.340258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.363 [2024-11-07 13:44:41.340274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.363 [2024-11-07 13:44:41.340282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.363 [2024-11-07 13:44:41.340289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.363 [2024-11-07 13:44:41.340305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.363 qpair failed and we were unable to recover it. 
00:39:33.363 [2024-11-07 13:44:41.350162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.363 [2024-11-07 13:44:41.350265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.363 [2024-11-07 13:44:41.350282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.363 [2024-11-07 13:44:41.350290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.363 [2024-11-07 13:44:41.350296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.363 [2024-11-07 13:44:41.350312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.363 qpair failed and we were unable to recover it. 00:39:33.363 [2024-11-07 13:44:41.360166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.363 [2024-11-07 13:44:41.360220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.363 [2024-11-07 13:44:41.360236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.363 [2024-11-07 13:44:41.360247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.363 [2024-11-07 13:44:41.360253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.363 [2024-11-07 13:44:41.360268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.363 qpair failed and we were unable to recover it. 00:39:33.626 [2024-11-07 13:44:41.370194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.626 [2024-11-07 13:44:41.370284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.626 [2024-11-07 13:44:41.370301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.626 [2024-11-07 13:44:41.370309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.626 [2024-11-07 13:44:41.370316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.626 [2024-11-07 13:44:41.370332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.626 qpair failed and we were unable to recover it. 
00:39:33.626 [2024-11-07 13:44:41.380229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.626 [2024-11-07 13:44:41.380286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.626 [2024-11-07 13:44:41.380302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.626 [2024-11-07 13:44:41.380310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.626 [2024-11-07 13:44:41.380317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.626 [2024-11-07 13:44:41.380332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.626 qpair failed and we were unable to recover it. 00:39:33.626 [2024-11-07 13:44:41.390238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.626 [2024-11-07 13:44:41.390301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.626 [2024-11-07 13:44:41.390316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.626 [2024-11-07 13:44:41.390324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.626 [2024-11-07 13:44:41.390330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.626 [2024-11-07 13:44:41.390345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.626 qpair failed and we were unable to recover it. 00:39:33.626 [2024-11-07 13:44:41.400357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.626 [2024-11-07 13:44:41.400419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.626 [2024-11-07 13:44:41.400435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.626 [2024-11-07 13:44:41.400444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.626 [2024-11-07 13:44:41.400450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.626 [2024-11-07 13:44:41.400468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.626 qpair failed and we were unable to recover it. 
00:39:33.626 [2024-11-07 13:44:41.410228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.626 [2024-11-07 13:44:41.410282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.626 [2024-11-07 13:44:41.410298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.626 [2024-11-07 13:44:41.410306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.626 [2024-11-07 13:44:41.410312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.626 [2024-11-07 13:44:41.410328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.626 qpair failed and we were unable to recover it. 00:39:33.626 [2024-11-07 13:44:41.420243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.626 [2024-11-07 13:44:41.420298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.626 [2024-11-07 13:44:41.420314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.626 [2024-11-07 13:44:41.420322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.626 [2024-11-07 13:44:41.420335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.626 [2024-11-07 13:44:41.420351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.626 qpair failed and we were unable to recover it. 00:39:33.626 [2024-11-07 13:44:41.430393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.626 [2024-11-07 13:44:41.430449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.626 [2024-11-07 13:44:41.430464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.626 [2024-11-07 13:44:41.430472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.626 [2024-11-07 13:44:41.430479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.626 [2024-11-07 13:44:41.430494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.626 qpair failed and we were unable to recover it. 
00:39:33.626 [2024-11-07 13:44:41.440411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.626 [2024-11-07 13:44:41.440472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.626 [2024-11-07 13:44:41.440487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.626 [2024-11-07 13:44:41.440496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.626 [2024-11-07 13:44:41.440502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.626 [2024-11-07 13:44:41.440518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.626 qpair failed and we were unable to recover it. 00:39:33.626 [2024-11-07 13:44:41.450432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.626 [2024-11-07 13:44:41.450488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.626 [2024-11-07 13:44:41.450505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.626 [2024-11-07 13:44:41.450513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.626 [2024-11-07 13:44:41.450519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.626 [2024-11-07 13:44:41.450535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.626 qpair failed and we were unable to recover it. 00:39:33.626 [2024-11-07 13:44:41.460367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.626 [2024-11-07 13:44:41.460420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.626 [2024-11-07 13:44:41.460436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.626 [2024-11-07 13:44:41.460444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.626 [2024-11-07 13:44:41.460450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:33.626 [2024-11-07 13:44:41.460466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:33.626 qpair failed and we were unable to recover it. 
00:39:33.626 [2024-11-07 13:44:41.470483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.626 [2024-11-07 13:44:41.470559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.626 [2024-11-07 13:44:41.470575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.470583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.470589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.470606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.480516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.480605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.627 [2024-11-07 13:44:41.480620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.480630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.480636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.480651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.490547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.490610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.627 [2024-11-07 13:44:41.490636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.490646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.490653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.490674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.500592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.500682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.627 [2024-11-07 13:44:41.500700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.500709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.500716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.500734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.510582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.510644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.627 [2024-11-07 13:44:41.510660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.510669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.510675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.510692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.520666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.520720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.627 [2024-11-07 13:44:41.520736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.520744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.520751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.520767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.530649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.530728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.627 [2024-11-07 13:44:41.530744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.530754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.530761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.530780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.540683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.540772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.627 [2024-11-07 13:44:41.540788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.540797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.540803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.540819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.550711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.550768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.627 [2024-11-07 13:44:41.550785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.550793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.550799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.550815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.560713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.560767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.627 [2024-11-07 13:44:41.560783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.560791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.560798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.560813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.570748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.570805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.627 [2024-11-07 13:44:41.570821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.570829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.570836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.570852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.580693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.580749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.627 [2024-11-07 13:44:41.580765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.580773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.580780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.580795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.590694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.590750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.627 [2024-11-07 13:44:41.590766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.627 [2024-11-07 13:44:41.590774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.627 [2024-11-07 13:44:41.590780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.627 [2024-11-07 13:44:41.590796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.627 qpair failed and we were unable to recover it.
00:39:33.627 [2024-11-07 13:44:41.600830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.627 [2024-11-07 13:44:41.600891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.628 [2024-11-07 13:44:41.600907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.628 [2024-11-07 13:44:41.600916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.628 [2024-11-07 13:44:41.600922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.628 [2024-11-07 13:44:41.600937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.628 qpair failed and we were unable to recover it.
00:39:33.628 [2024-11-07 13:44:41.610869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.628 [2024-11-07 13:44:41.610926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.628 [2024-11-07 13:44:41.610943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.628 [2024-11-07 13:44:41.610951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.628 [2024-11-07 13:44:41.610957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.628 [2024-11-07 13:44:41.610973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.628 qpair failed and we were unable to recover it.
00:39:33.628 [2024-11-07 13:44:41.620782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.628 [2024-11-07 13:44:41.620833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.628 [2024-11-07 13:44:41.620851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.628 [2024-11-07 13:44:41.620860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.628 [2024-11-07 13:44:41.620870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.628 [2024-11-07 13:44:41.620893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.628 qpair failed and we were unable to recover it.
00:39:33.889 [2024-11-07 13:44:41.630927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.889 [2024-11-07 13:44:41.630983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.889 [2024-11-07 13:44:41.630999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.889 [2024-11-07 13:44:41.631007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.889 [2024-11-07 13:44:41.631014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.889 [2024-11-07 13:44:41.631030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.889 qpair failed and we were unable to recover it.
00:39:33.889 [2024-11-07 13:44:41.640911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.889 [2024-11-07 13:44:41.640971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.889 [2024-11-07 13:44:41.640987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.889 [2024-11-07 13:44:41.640995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.889 [2024-11-07 13:44:41.641002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.889 [2024-11-07 13:44:41.641017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.889 qpair failed and we were unable to recover it.
00:39:33.889 [2024-11-07 13:44:41.650954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.889 [2024-11-07 13:44:41.651015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.889 [2024-11-07 13:44:41.651030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.651039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.651045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.651061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.660977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.661038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.661054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.661063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.661072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.661088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.671013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.671070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.671086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.671094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.671100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.671115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.681026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.681086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.681107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.681116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.681122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.681138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.691091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.691175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.691191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.691199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.691206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.691221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.701107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.701160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.701176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.701184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.701191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.701207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.711017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.711071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.711087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.711095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.711101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.711117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.721162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.721217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.721233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.721242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.721248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.721264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.731164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.731217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.731233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.731241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.731248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.731263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.741205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.741266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.741284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.741292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.741298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.741314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.751243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.751300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.751319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.751327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.751333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.751349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.761282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.761333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.761349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.761357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.761364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.761380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.771283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.771341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.771358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.771366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.771372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.890 [2024-11-07 13:44:41.771388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.890 qpair failed and we were unable to recover it.
00:39:33.890 [2024-11-07 13:44:41.781222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.890 [2024-11-07 13:44:41.781280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.890 [2024-11-07 13:44:41.781296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.890 [2024-11-07 13:44:41.781305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.890 [2024-11-07 13:44:41.781311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.891 [2024-11-07 13:44:41.781327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.891 qpair failed and we were unable to recover it.
00:39:33.891 [2024-11-07 13:44:41.791360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.891 [2024-11-07 13:44:41.791423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.891 [2024-11-07 13:44:41.791439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.891 [2024-11-07 13:44:41.791453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.891 [2024-11-07 13:44:41.791459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.891 [2024-11-07 13:44:41.791475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.891 qpair failed and we were unable to recover it.
00:39:33.891 [2024-11-07 13:44:41.801394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.891 [2024-11-07 13:44:41.801448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.891 [2024-11-07 13:44:41.801463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.891 [2024-11-07 13:44:41.801471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.891 [2024-11-07 13:44:41.801478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.891 [2024-11-07 13:44:41.801493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.891 qpair failed and we were unable to recover it.
00:39:33.891 [2024-11-07 13:44:41.811410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.891 [2024-11-07 13:44:41.811478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.891 [2024-11-07 13:44:41.811494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.891 [2024-11-07 13:44:41.811502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.891 [2024-11-07 13:44:41.811508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.891 [2024-11-07 13:44:41.811523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.891 qpair failed and we were unable to recover it.
00:39:33.891 [2024-11-07 13:44:41.821402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.891 [2024-11-07 13:44:41.821501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.891 [2024-11-07 13:44:41.821517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.891 [2024-11-07 13:44:41.821525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.891 [2024-11-07 13:44:41.821532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.891 [2024-11-07 13:44:41.821547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.891 qpair failed and we were unable to recover it.
00:39:33.891 [2024-11-07 13:44:41.831452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.891 [2024-11-07 13:44:41.831508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.891 [2024-11-07 13:44:41.831524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.891 [2024-11-07 13:44:41.831532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.891 [2024-11-07 13:44:41.831538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.891 [2024-11-07 13:44:41.831553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.891 qpair failed and we were unable to recover it.
00:39:33.891 [2024-11-07 13:44:41.841597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.891 [2024-11-07 13:44:41.841654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.891 [2024-11-07 13:44:41.841670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.891 [2024-11-07 13:44:41.841678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.891 [2024-11-07 13:44:41.841685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.891 [2024-11-07 13:44:41.841700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.891 qpair failed and we were unable to recover it.
00:39:33.891 [2024-11-07 13:44:41.851611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.891 [2024-11-07 13:44:41.851666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.891 [2024-11-07 13:44:41.851682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.891 [2024-11-07 13:44:41.851690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.891 [2024-11-07 13:44:41.851696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.891 [2024-11-07 13:44:41.851711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.891 qpair failed and we were unable to recover it.
00:39:33.891 [2024-11-07 13:44:41.861541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.891 [2024-11-07 13:44:41.861596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.891 [2024-11-07 13:44:41.861612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.891 [2024-11-07 13:44:41.861620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.891 [2024-11-07 13:44:41.861626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.891 [2024-11-07 13:44:41.861642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.891 qpair failed and we were unable to recover it.
00:39:33.891 [2024-11-07 13:44:41.871595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.891 [2024-11-07 13:44:41.871683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.891 [2024-11-07 13:44:41.871700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.891 [2024-11-07 13:44:41.871708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.891 [2024-11-07 13:44:41.871714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.891 [2024-11-07 13:44:41.871732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.891 qpair failed and we were unable to recover it.
00:39:33.891 [2024-11-07 13:44:41.881596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.891 [2024-11-07 13:44:41.881654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.891 [2024-11-07 13:44:41.881670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.891 [2024-11-07 13:44:41.881678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.891 [2024-11-07 13:44:41.881685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.891 [2024-11-07 13:44:41.881700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.891 qpair failed and we were unable to recover it.
00:39:33.891 [2024-11-07 13:44:41.891566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.891 [2024-11-07 13:44:41.891619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.891 [2024-11-07 13:44:41.891635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.891 [2024-11-07 13:44:41.891643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.891 [2024-11-07 13:44:41.891650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:33.891 [2024-11-07 13:44:41.891665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:33.891 qpair failed and we were unable to recover it.
00:39:34.154 [2024-11-07 13:44:41.901647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.154 [2024-11-07 13:44:41.901702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.154 [2024-11-07 13:44:41.901717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.154 [2024-11-07 13:44:41.901726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.154 [2024-11-07 13:44:41.901732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.154 [2024-11-07 13:44:41.901748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.154 qpair failed and we were unable to recover it.
00:39:34.154 [2024-11-07 13:44:41.911687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.154 [2024-11-07 13:44:41.911742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.154 [2024-11-07 13:44:41.911758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.154 [2024-11-07 13:44:41.911766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.154 [2024-11-07 13:44:41.911773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.154 [2024-11-07 13:44:41.911788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.154 qpair failed and we were unable to recover it.
00:39:34.154 [2024-11-07 13:44:41.921695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.154 [2024-11-07 13:44:41.921750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.154 [2024-11-07 13:44:41.921765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.154 [2024-11-07 13:44:41.921776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.154 [2024-11-07 13:44:41.921783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.154 [2024-11-07 13:44:41.921799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.154 qpair failed and we were unable to recover it.
00:39:34.154 [2024-11-07 13:44:41.931719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.154 [2024-11-07 13:44:41.931809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.154 [2024-11-07 13:44:41.931825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.154 [2024-11-07 13:44:41.931833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.154 [2024-11-07 13:44:41.931840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.154 [2024-11-07 13:44:41.931877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.154 qpair failed and we were unable to recover it.
00:39:34.154 [2024-11-07 13:44:41.941765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.154 [2024-11-07 13:44:41.941819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.154 [2024-11-07 13:44:41.941834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.154 [2024-11-07 13:44:41.941842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.154 [2024-11-07 13:44:41.941849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.154 [2024-11-07 13:44:41.941870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.154 qpair failed and we were unable to recover it.
00:39:34.154 [2024-11-07 13:44:41.951779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.154 [2024-11-07 13:44:41.951836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.154 [2024-11-07 13:44:41.951852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.154 [2024-11-07 13:44:41.951860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.154 [2024-11-07 13:44:41.951870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.154 [2024-11-07 13:44:41.951889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.154 qpair failed and we were unable to recover it.
00:39:34.154 [2024-11-07 13:44:41.961780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.154 [2024-11-07 13:44:41.961840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.155 [2024-11-07 13:44:41.961856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.155 [2024-11-07 13:44:41.961867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.155 [2024-11-07 13:44:41.961873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.155 [2024-11-07 13:44:41.961892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.155 qpair failed and we were unable to recover it.
00:39:34.155 [2024-11-07 13:44:41.971844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.155 [2024-11-07 13:44:41.971917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.155 [2024-11-07 13:44:41.971933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.155 [2024-11-07 13:44:41.971941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.155 [2024-11-07 13:44:41.971948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.155 [2024-11-07 13:44:41.971963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.155 qpair failed and we were unable to recover it.
00:39:34.155 [2024-11-07 13:44:41.981825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.155 [2024-11-07 13:44:41.981884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.155 [2024-11-07 13:44:41.981901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.155 [2024-11-07 13:44:41.981909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.155 [2024-11-07 13:44:41.981915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.155 [2024-11-07 13:44:41.981931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.155 qpair failed and we were unable to recover it.
00:39:34.155 [2024-11-07 13:44:41.991777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.155 [2024-11-07 13:44:41.991835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.155 [2024-11-07 13:44:41.991851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.155 [2024-11-07 13:44:41.991859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.155 [2024-11-07 13:44:41.991871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.155 [2024-11-07 13:44:41.991887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.155 qpair failed and we were unable to recover it.
00:39:34.155 [2024-11-07 13:44:42.001924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.155 [2024-11-07 13:44:42.001976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.155 [2024-11-07 13:44:42.001992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.155 [2024-11-07 13:44:42.002000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.155 [2024-11-07 13:44:42.002007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.155 [2024-11-07 13:44:42.002022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.155 qpair failed and we were unable to recover it.
00:39:34.155 [2024-11-07 13:44:42.011946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.155 [2024-11-07 13:44:42.012017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.155 [2024-11-07 13:44:42.012034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.155 [2024-11-07 13:44:42.012042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.155 [2024-11-07 13:44:42.012048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.155 [2024-11-07 13:44:42.012065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.155 qpair failed and we were unable to recover it.
00:39:34.155 [2024-11-07 13:44:42.022010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.155 [2024-11-07 13:44:42.022071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.155 [2024-11-07 13:44:42.022087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.155 [2024-11-07 13:44:42.022095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.155 [2024-11-07 13:44:42.022101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.155 [2024-11-07 13:44:42.022117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.155 qpair failed and we were unable to recover it.
00:39:34.155 [2024-11-07 13:44:42.032007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.155 [2024-11-07 13:44:42.032061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.155 [2024-11-07 13:44:42.032077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.155 [2024-11-07 13:44:42.032085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.155 [2024-11-07 13:44:42.032092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.155 [2024-11-07 13:44:42.032107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.155 qpair failed and we were unable to recover it.
00:39:34.155 [2024-11-07 13:44:42.042022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.155 [2024-11-07 13:44:42.042079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.155 [2024-11-07 13:44:42.042095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.155 [2024-11-07 13:44:42.042103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.155 [2024-11-07 13:44:42.042109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.155 [2024-11-07 13:44:42.042125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.155 qpair failed and we were unable to recover it.
00:39:34.155 [2024-11-07 13:44:42.052053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.155 [2024-11-07 13:44:42.052114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.155 [2024-11-07 13:44:42.052133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.155 [2024-11-07 13:44:42.052141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.155 [2024-11-07 13:44:42.052147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.155 [2024-11-07 13:44:42.052163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.155 qpair failed and we were unable to recover it.
00:39:34.155 [2024-11-07 13:44:42.062163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.155 [2024-11-07 13:44:42.062218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.155 [2024-11-07 13:44:42.062234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.155 [2024-11-07 13:44:42.062243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.155 [2024-11-07 13:44:42.062249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.155 [2024-11-07 13:44:42.062264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.155 qpair failed and we were unable to recover it.
00:39:34.155 [2024-11-07 13:44:42.072112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.155 [2024-11-07 13:44:42.072178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.155 [2024-11-07 13:44:42.072194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.155 [2024-11-07 13:44:42.072202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.155 [2024-11-07 13:44:42.072208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.155 [2024-11-07 13:44:42.072223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.155 qpair failed and we were unable to recover it.
00:39:34.155 [2024-11-07 13:44:42.082125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.156 [2024-11-07 13:44:42.082202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.156 [2024-11-07 13:44:42.082217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.156 [2024-11-07 13:44:42.082226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.156 [2024-11-07 13:44:42.082232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.156 [2024-11-07 13:44:42.082247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.156 qpair failed and we were unable to recover it.
00:39:34.156 [2024-11-07 13:44:42.092165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.156 [2024-11-07 13:44:42.092263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.156 [2024-11-07 13:44:42.092279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.156 [2024-11-07 13:44:42.092288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.156 [2024-11-07 13:44:42.092296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.156 [2024-11-07 13:44:42.092312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.156 qpair failed and we were unable to recover it.
00:39:34.156 [2024-11-07 13:44:42.102185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.156 [2024-11-07 13:44:42.102238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.156 [2024-11-07 13:44:42.102254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.156 [2024-11-07 13:44:42.102262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.156 [2024-11-07 13:44:42.102268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.156 [2024-11-07 13:44:42.102284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.156 qpair failed and we were unable to recover it.
00:39:34.156 [2024-11-07 13:44:42.112204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.156 [2024-11-07 13:44:42.112263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.156 [2024-11-07 13:44:42.112278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.156 [2024-11-07 13:44:42.112286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.156 [2024-11-07 13:44:42.112292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.156 [2024-11-07 13:44:42.112308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.156 qpair failed and we were unable to recover it.
00:39:34.156 [2024-11-07 13:44:42.122249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.156 [2024-11-07 13:44:42.122302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.156 [2024-11-07 13:44:42.122318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.156 [2024-11-07 13:44:42.122326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.156 [2024-11-07 13:44:42.122332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.156 [2024-11-07 13:44:42.122349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.156 qpair failed and we were unable to recover it.
00:39:34.156 [2024-11-07 13:44:42.132261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.156 [2024-11-07 13:44:42.132318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.156 [2024-11-07 13:44:42.132334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.156 [2024-11-07 13:44:42.132342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.156 [2024-11-07 13:44:42.132348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.156 [2024-11-07 13:44:42.132364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.156 qpair failed and we were unable to recover it.
00:39:34.156 [2024-11-07 13:44:42.142274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.156 [2024-11-07 13:44:42.142334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.156 [2024-11-07 13:44:42.142350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.156 [2024-11-07 13:44:42.142358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.156 [2024-11-07 13:44:42.142365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.156 [2024-11-07 13:44:42.142380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.156 qpair failed and we were unable to recover it.
00:39:34.156 [2024-11-07 13:44:42.152326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.156 [2024-11-07 13:44:42.152381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.156 [2024-11-07 13:44:42.152397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.156 [2024-11-07 13:44:42.152405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.156 [2024-11-07 13:44:42.152411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.156 [2024-11-07 13:44:42.152427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.156 qpair failed and we were unable to recover it.
00:39:34.419 [2024-11-07 13:44:42.162356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.419 [2024-11-07 13:44:42.162412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.419 [2024-11-07 13:44:42.162429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.419 [2024-11-07 13:44:42.162437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.419 [2024-11-07 13:44:42.162444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.419 [2024-11-07 13:44:42.162459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.419 qpair failed and we were unable to recover it. 00:39:34.419 [2024-11-07 13:44:42.172372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.419 [2024-11-07 13:44:42.172427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.419 [2024-11-07 13:44:42.172443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.419 [2024-11-07 13:44:42.172452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.419 [2024-11-07 13:44:42.172458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.419 [2024-11-07 13:44:42.172474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.419 qpair failed and we were unable to recover it. 00:39:34.419 [2024-11-07 13:44:42.182363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.419 [2024-11-07 13:44:42.182414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.419 [2024-11-07 13:44:42.182433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.419 [2024-11-07 13:44:42.182441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.419 [2024-11-07 13:44:42.182448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.419 [2024-11-07 13:44:42.182463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.419 qpair failed and we were unable to recover it. 
00:39:34.419 [2024-11-07 13:44:42.192421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.419 [2024-11-07 13:44:42.192478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.419 [2024-11-07 13:44:42.192494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.419 [2024-11-07 13:44:42.192506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.419 [2024-11-07 13:44:42.192512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.419 [2024-11-07 13:44:42.192528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.419 qpair failed and we were unable to recover it. 00:39:34.419 [2024-11-07 13:44:42.202450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.419 [2024-11-07 13:44:42.202507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.419 [2024-11-07 13:44:42.202523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.419 [2024-11-07 13:44:42.202531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.419 [2024-11-07 13:44:42.202537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.419 [2024-11-07 13:44:42.202552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.419 qpair failed and we were unable to recover it. 00:39:34.419 [2024-11-07 13:44:42.212446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.419 [2024-11-07 13:44:42.212501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.419 [2024-11-07 13:44:42.212517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.419 [2024-11-07 13:44:42.212525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.419 [2024-11-07 13:44:42.212531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.419 [2024-11-07 13:44:42.212546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.419 qpair failed and we were unable to recover it. 
00:39:34.419 [2024-11-07 13:44:42.222490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.419 [2024-11-07 13:44:42.222544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.419 [2024-11-07 13:44:42.222560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.419 [2024-11-07 13:44:42.222568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.419 [2024-11-07 13:44:42.222581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.419 [2024-11-07 13:44:42.222597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.419 qpair failed and we were unable to recover it. 00:39:34.419 [2024-11-07 13:44:42.232518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.419 [2024-11-07 13:44:42.232584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.419 [2024-11-07 13:44:42.232607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.419 [2024-11-07 13:44:42.232617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.419 [2024-11-07 13:44:42.232624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.419 [2024-11-07 13:44:42.232644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.419 qpair failed and we were unable to recover it. 00:39:34.419 [2024-11-07 13:44:42.242561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.419 [2024-11-07 13:44:42.242626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.420 [2024-11-07 13:44:42.242650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.420 [2024-11-07 13:44:42.242660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.420 [2024-11-07 13:44:42.242667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.420 [2024-11-07 13:44:42.242687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.420 qpair failed and we were unable to recover it. 
00:39:34.420 [2024-11-07 13:44:42.252541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.420 [2024-11-07 13:44:42.252600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.420 [2024-11-07 13:44:42.252624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.420 [2024-11-07 13:44:42.252633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.420 [2024-11-07 13:44:42.252640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.420 [2024-11-07 13:44:42.252660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.420 qpair failed and we were unable to recover it. 00:39:34.420 [2024-11-07 13:44:42.262586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.420 [2024-11-07 13:44:42.262677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.420 [2024-11-07 13:44:42.262700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.420 [2024-11-07 13:44:42.262710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.420 [2024-11-07 13:44:42.262717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.420 [2024-11-07 13:44:42.262737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.420 qpair failed and we were unable to recover it. 00:39:34.420 [2024-11-07 13:44:42.272649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.420 [2024-11-07 13:44:42.272715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.420 [2024-11-07 13:44:42.272732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.420 [2024-11-07 13:44:42.272741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.420 [2024-11-07 13:44:42.272748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.420 [2024-11-07 13:44:42.272765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.420 qpair failed and we were unable to recover it. 
00:39:34.420 [2024-11-07 13:44:42.282657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.420 [2024-11-07 13:44:42.282710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.420 [2024-11-07 13:44:42.282727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.420 [2024-11-07 13:44:42.282735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.420 [2024-11-07 13:44:42.282742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.420 [2024-11-07 13:44:42.282761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.420 qpair failed and we were unable to recover it. 00:39:34.420 [2024-11-07 13:44:42.292671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.420 [2024-11-07 13:44:42.292727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.420 [2024-11-07 13:44:42.292743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.420 [2024-11-07 13:44:42.292751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.420 [2024-11-07 13:44:42.292757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.420 [2024-11-07 13:44:42.292773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.420 qpair failed and we were unable to recover it. 00:39:34.420 [2024-11-07 13:44:42.302681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.420 [2024-11-07 13:44:42.302740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.420 [2024-11-07 13:44:42.302756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.420 [2024-11-07 13:44:42.302765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.420 [2024-11-07 13:44:42.302771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.420 [2024-11-07 13:44:42.302787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.420 qpair failed and we were unable to recover it. 
00:39:34.420 [2024-11-07 13:44:42.312725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.420 [2024-11-07 13:44:42.312781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.420 [2024-11-07 13:44:42.312800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.420 [2024-11-07 13:44:42.312808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.420 [2024-11-07 13:44:42.312814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.420 [2024-11-07 13:44:42.312830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.420 qpair failed and we were unable to recover it. 00:39:34.420 [2024-11-07 13:44:42.322770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.420 [2024-11-07 13:44:42.322824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.420 [2024-11-07 13:44:42.322840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.420 [2024-11-07 13:44:42.322849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.420 [2024-11-07 13:44:42.322855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.420 [2024-11-07 13:44:42.322874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.420 qpair failed and we were unable to recover it. 00:39:34.420 [2024-11-07 13:44:42.332801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.420 [2024-11-07 13:44:42.332859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.420 [2024-11-07 13:44:42.332879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.420 [2024-11-07 13:44:42.332887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.420 [2024-11-07 13:44:42.332896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.420 [2024-11-07 13:44:42.332913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.420 qpair failed and we were unable to recover it. 
00:39:34.420 [2024-11-07 13:44:42.342788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.420 [2024-11-07 13:44:42.342844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.420 [2024-11-07 13:44:42.342860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.420 [2024-11-07 13:44:42.342873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.420 [2024-11-07 13:44:42.342879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.420 [2024-11-07 13:44:42.342895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.420 qpair failed and we were unable to recover it. 00:39:34.420 [2024-11-07 13:44:42.352732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.420 [2024-11-07 13:44:42.352790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.420 [2024-11-07 13:44:42.352806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.420 [2024-11-07 13:44:42.352817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.420 [2024-11-07 13:44:42.352823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.420 [2024-11-07 13:44:42.352839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.420 qpair failed and we were unable to recover it. 00:39:34.420 [2024-11-07 13:44:42.362879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.421 [2024-11-07 13:44:42.362935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.421 [2024-11-07 13:44:42.362951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.421 [2024-11-07 13:44:42.362960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.421 [2024-11-07 13:44:42.362966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.421 [2024-11-07 13:44:42.362982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.421 qpair failed and we were unable to recover it. 
00:39:34.421 [2024-11-07 13:44:42.372933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.421 [2024-11-07 13:44:42.373002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.421 [2024-11-07 13:44:42.373018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.421 [2024-11-07 13:44:42.373025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.421 [2024-11-07 13:44:42.373032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.421 [2024-11-07 13:44:42.373047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.421 qpair failed and we were unable to recover it. 00:39:34.421 [2024-11-07 13:44:42.382910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.421 [2024-11-07 13:44:42.382968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.421 [2024-11-07 13:44:42.382984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.421 [2024-11-07 13:44:42.382992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.421 [2024-11-07 13:44:42.382998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.421 [2024-11-07 13:44:42.383014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.421 qpair failed and we were unable to recover it. 00:39:34.421 [2024-11-07 13:44:42.392969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.421 [2024-11-07 13:44:42.393025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.421 [2024-11-07 13:44:42.393041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.421 [2024-11-07 13:44:42.393049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.421 [2024-11-07 13:44:42.393055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.421 [2024-11-07 13:44:42.393071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.421 qpair failed and we were unable to recover it. 
00:39:34.421 [2024-11-07 13:44:42.402979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.421 [2024-11-07 13:44:42.403080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.421 [2024-11-07 13:44:42.403097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.421 [2024-11-07 13:44:42.403105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.421 [2024-11-07 13:44:42.403112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.421 [2024-11-07 13:44:42.403128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.421 qpair failed and we were unable to recover it. 00:39:34.421 [2024-11-07 13:44:42.412931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.421 [2024-11-07 13:44:42.413013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.421 [2024-11-07 13:44:42.413029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.421 [2024-11-07 13:44:42.413037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.421 [2024-11-07 13:44:42.413044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.421 [2024-11-07 13:44:42.413059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.421 qpair failed and we were unable to recover it. 00:39:34.684 [2024-11-07 13:44:42.423006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.684 [2024-11-07 13:44:42.423064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.684 [2024-11-07 13:44:42.423080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.684 [2024-11-07 13:44:42.423088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.684 [2024-11-07 13:44:42.423094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.684 [2024-11-07 13:44:42.423110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.684 qpair failed and we were unable to recover it. 
00:39:34.684 [2024-11-07 13:44:42.433058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.684 [2024-11-07 13:44:42.433120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.684 [2024-11-07 13:44:42.433136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.684 [2024-11-07 13:44:42.433144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.684 [2024-11-07 13:44:42.433151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.684 [2024-11-07 13:44:42.433166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.684 qpair failed and we were unable to recover it. 00:39:34.684 [2024-11-07 13:44:42.443258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.684 [2024-11-07 13:44:42.443321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.684 [2024-11-07 13:44:42.443337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.684 [2024-11-07 13:44:42.443346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.684 [2024-11-07 13:44:42.443352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.684 [2024-11-07 13:44:42.443368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.684 qpair failed and we were unable to recover it. 00:39:34.684 [2024-11-07 13:44:42.453120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.684 [2024-11-07 13:44:42.453177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.453194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.453202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.453208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.685 [2024-11-07 13:44:42.453224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.685 qpair failed and we were unable to recover it. 
00:39:34.685 [2024-11-07 13:44:42.463141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.685 [2024-11-07 13:44:42.463199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.463216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.463223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.463230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.685 [2024-11-07 13:44:42.463245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.685 qpair failed and we were unable to recover it. 00:39:34.685 [2024-11-07 13:44:42.473191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.685 [2024-11-07 13:44:42.473248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.473264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.473272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.473279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.685 [2024-11-07 13:44:42.473294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.685 qpair failed and we were unable to recover it. 00:39:34.685 [2024-11-07 13:44:42.483235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.685 [2024-11-07 13:44:42.483299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.483316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.483328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.483334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.685 [2024-11-07 13:44:42.483350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.685 qpair failed and we were unable to recover it. 
00:39:34.685 [2024-11-07 13:44:42.493255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.685 [2024-11-07 13:44:42.493313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.493330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.493338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.493344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.685 [2024-11-07 13:44:42.493359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.685 qpair failed and we were unable to recover it. 00:39:34.685 [2024-11-07 13:44:42.503246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.685 [2024-11-07 13:44:42.503310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.503326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.503334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.503340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.685 [2024-11-07 13:44:42.503355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.685 qpair failed and we were unable to recover it. 00:39:34.685 [2024-11-07 13:44:42.513263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.685 [2024-11-07 13:44:42.513348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.513364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.513372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.513379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.685 [2024-11-07 13:44:42.513394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.685 qpair failed and we were unable to recover it. 
00:39:34.685 [2024-11-07 13:44:42.523342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.685 [2024-11-07 13:44:42.523398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.523414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.523423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.523429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.685 [2024-11-07 13:44:42.523448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.685 qpair failed and we were unable to recover it. 00:39:34.685 [2024-11-07 13:44:42.533319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.685 [2024-11-07 13:44:42.533374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.533390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.533399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.533405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.685 [2024-11-07 13:44:42.533420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.685 qpair failed and we were unable to recover it. 00:39:34.685 [2024-11-07 13:44:42.543365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.685 [2024-11-07 13:44:42.543422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.543438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.543446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.543453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.685 [2024-11-07 13:44:42.543468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.685 qpair failed and we were unable to recover it. 
00:39:34.685 [2024-11-07 13:44:42.553383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.685 [2024-11-07 13:44:42.553443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.553459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.553467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.553474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.685 [2024-11-07 13:44:42.553490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.685 qpair failed and we were unable to recover it. 00:39:34.685 [2024-11-07 13:44:42.563426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.685 [2024-11-07 13:44:42.563481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.563497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.563505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.563511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.685 [2024-11-07 13:44:42.563527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.685 qpair failed and we were unable to recover it. 00:39:34.685 [2024-11-07 13:44:42.573437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.685 [2024-11-07 13:44:42.573496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.685 [2024-11-07 13:44:42.573513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.685 [2024-11-07 13:44:42.573521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.685 [2024-11-07 13:44:42.573527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.686 [2024-11-07 13:44:42.573543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.686 qpair failed and we were unable to recover it. 
00:39:34.686 [2024-11-07 13:44:42.583442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.686 [2024-11-07 13:44:42.583500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.686 [2024-11-07 13:44:42.583516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.686 [2024-11-07 13:44:42.583524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.686 [2024-11-07 13:44:42.583532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.686 [2024-11-07 13:44:42.583547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.686 qpair failed and we were unable to recover it. 00:39:34.686 [2024-11-07 13:44:42.593506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.686 [2024-11-07 13:44:42.593566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.686 [2024-11-07 13:44:42.593582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.686 [2024-11-07 13:44:42.593590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.686 [2024-11-07 13:44:42.593596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.686 [2024-11-07 13:44:42.593612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.686 qpair failed and we were unable to recover it. 00:39:34.686 [2024-11-07 13:44:42.603532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.686 [2024-11-07 13:44:42.603590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.686 [2024-11-07 13:44:42.603606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.686 [2024-11-07 13:44:42.603614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.686 [2024-11-07 13:44:42.603620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.686 [2024-11-07 13:44:42.603636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.686 qpair failed and we were unable to recover it. 
00:39:34.686 [2024-11-07 13:44:42.613549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.686 [2024-11-07 13:44:42.613634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.686 [2024-11-07 13:44:42.613653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.686 [2024-11-07 13:44:42.613661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.686 [2024-11-07 13:44:42.613667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.686 [2024-11-07 13:44:42.613696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.686 qpair failed and we were unable to recover it. 00:39:34.686 [2024-11-07 13:44:42.623568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.686 [2024-11-07 13:44:42.623625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.686 [2024-11-07 13:44:42.623641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.686 [2024-11-07 13:44:42.623649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.686 [2024-11-07 13:44:42.623655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.686 [2024-11-07 13:44:42.623671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.686 qpair failed and we were unable to recover it. 00:39:34.686 [2024-11-07 13:44:42.633586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.686 [2024-11-07 13:44:42.633674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.686 [2024-11-07 13:44:42.633690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.686 [2024-11-07 13:44:42.633699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.686 [2024-11-07 13:44:42.633705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.686 [2024-11-07 13:44:42.633721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.686 qpair failed and we were unable to recover it. 
00:39:34.686 [2024-11-07 13:44:42.643687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.686 [2024-11-07 13:44:42.643748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.686 [2024-11-07 13:44:42.643763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.686 [2024-11-07 13:44:42.643772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.686 [2024-11-07 13:44:42.643778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.686 [2024-11-07 13:44:42.643793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.686 qpair failed and we were unable to recover it. 00:39:34.686 [2024-11-07 13:44:42.653654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.686 [2024-11-07 13:44:42.653705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.686 [2024-11-07 13:44:42.653722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.686 [2024-11-07 13:44:42.653730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.686 [2024-11-07 13:44:42.653738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.686 [2024-11-07 13:44:42.653754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.686 qpair failed and we were unable to recover it. 00:39:34.686 [2024-11-07 13:44:42.663724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.686 [2024-11-07 13:44:42.663800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.686 [2024-11-07 13:44:42.663815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.686 [2024-11-07 13:44:42.663823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.686 [2024-11-07 13:44:42.663829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.686 [2024-11-07 13:44:42.663845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.686 qpair failed and we were unable to recover it. 
00:39:34.686 [2024-11-07 13:44:42.673696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.686 [2024-11-07 13:44:42.673779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.686 [2024-11-07 13:44:42.673795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.686 [2024-11-07 13:44:42.673803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.686 [2024-11-07 13:44:42.673809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.686 [2024-11-07 13:44:42.673824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.686 qpair failed and we were unable to recover it. 00:39:34.686 [2024-11-07 13:44:42.683640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.686 [2024-11-07 13:44:42.683696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.686 [2024-11-07 13:44:42.683713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.686 [2024-11-07 13:44:42.683721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.687 [2024-11-07 13:44:42.683727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.687 [2024-11-07 13:44:42.683743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.687 qpair failed and we were unable to recover it. 00:39:34.949 [2024-11-07 13:44:42.693651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.949 [2024-11-07 13:44:42.693721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.949 [2024-11-07 13:44:42.693737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.949 [2024-11-07 13:44:42.693745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.949 [2024-11-07 13:44:42.693751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.949 [2024-11-07 13:44:42.693767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.949 qpair failed and we were unable to recover it. 
00:39:34.949 [2024-11-07 13:44:42.703789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.949 [2024-11-07 13:44:42.703842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.949 [2024-11-07 13:44:42.703858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.949 [2024-11-07 13:44:42.703870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.949 [2024-11-07 13:44:42.703881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.949 [2024-11-07 13:44:42.703897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.949 qpair failed and we were unable to recover it. 00:39:34.949 [2024-11-07 13:44:42.713790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.949 [2024-11-07 13:44:42.713844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.949 [2024-11-07 13:44:42.713865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.949 [2024-11-07 13:44:42.713874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.949 [2024-11-07 13:44:42.713880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.949 [2024-11-07 13:44:42.713897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.949 qpair failed and we were unable to recover it. 00:39:34.949 [2024-11-07 13:44:42.723843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.949 [2024-11-07 13:44:42.723904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.949 [2024-11-07 13:44:42.723920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.949 [2024-11-07 13:44:42.723928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.949 [2024-11-07 13:44:42.723934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.949 [2024-11-07 13:44:42.723951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.949 qpair failed and we were unable to recover it. 
00:39:34.949 [2024-11-07 13:44:42.733868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.949 [2024-11-07 13:44:42.733924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.949 [2024-11-07 13:44:42.733941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.949 [2024-11-07 13:44:42.733950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.949 [2024-11-07 13:44:42.733958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.949 [2024-11-07 13:44:42.733975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.949 qpair failed and we were unable to recover it. 00:39:34.949 [2024-11-07 13:44:42.743887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.949 [2024-11-07 13:44:42.743943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.949 [2024-11-07 13:44:42.743962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.949 [2024-11-07 13:44:42.743971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.949 [2024-11-07 13:44:42.743977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.743994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 00:39:34.950 [2024-11-07 13:44:42.753836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.753904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.753920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.950 [2024-11-07 13:44:42.753929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.950 [2024-11-07 13:44:42.753935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.753951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 
00:39:34.950 [2024-11-07 13:44:42.763974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.764030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.764046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.950 [2024-11-07 13:44:42.764054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.950 [2024-11-07 13:44:42.764060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.764076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 00:39:34.950 [2024-11-07 13:44:42.773998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.774051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.774067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.950 [2024-11-07 13:44:42.774075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.950 [2024-11-07 13:44:42.774081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.774098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 00:39:34.950 [2024-11-07 13:44:42.784004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.784086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.784102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.950 [2024-11-07 13:44:42.784110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.950 [2024-11-07 13:44:42.784120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.784135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 
00:39:34.950 [2024-11-07 13:44:42.794065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.794128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.794144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.950 [2024-11-07 13:44:42.794153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.950 [2024-11-07 13:44:42.794159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.794175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 00:39:34.950 [2024-11-07 13:44:42.804050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.804106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.804122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.950 [2024-11-07 13:44:42.804131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.950 [2024-11-07 13:44:42.804137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.804153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 00:39:34.950 [2024-11-07 13:44:42.814114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.814173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.814189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.950 [2024-11-07 13:44:42.814197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.950 [2024-11-07 13:44:42.814203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.814219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 
00:39:34.950 [2024-11-07 13:44:42.824091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.824146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.824162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.950 [2024-11-07 13:44:42.824170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.950 [2024-11-07 13:44:42.824177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.824193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 00:39:34.950 [2024-11-07 13:44:42.834182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.834236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.834252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.950 [2024-11-07 13:44:42.834260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.950 [2024-11-07 13:44:42.834266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.834282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 00:39:34.950 [2024-11-07 13:44:42.844088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.844161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.844177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.950 [2024-11-07 13:44:42.844186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.950 [2024-11-07 13:44:42.844192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.844208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 
00:39:34.950 [2024-11-07 13:44:42.854095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.854156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.854172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.950 [2024-11-07 13:44:42.854180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.950 [2024-11-07 13:44:42.854186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.854203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 00:39:34.950 [2024-11-07 13:44:42.864214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.864274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.864291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.950 [2024-11-07 13:44:42.864299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.950 [2024-11-07 13:44:42.864305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.950 [2024-11-07 13:44:42.864321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.950 qpair failed and we were unable to recover it. 00:39:34.950 [2024-11-07 13:44:42.874169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:34.950 [2024-11-07 13:44:42.874225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:34.950 [2024-11-07 13:44:42.874243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:34.951 [2024-11-07 13:44:42.874252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:34.951 [2024-11-07 13:44:42.874258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80 00:39:34.951 [2024-11-07 13:44:42.874274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:39:34.951 qpair failed and we were unable to recover it. 
00:39:34.951 [2024-11-07 13:44:42.884168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.951 [2024-11-07 13:44:42.884223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.951 [2024-11-07 13:44:42.884239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.951 [2024-11-07 13:44:42.884247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.951 [2024-11-07 13:44:42.884253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.951 [2024-11-07 13:44:42.884269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.951 qpair failed and we were unable to recover it.
00:39:34.951 [2024-11-07 13:44:42.894292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.951 [2024-11-07 13:44:42.894346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.951 [2024-11-07 13:44:42.894363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.951 [2024-11-07 13:44:42.894373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.951 [2024-11-07 13:44:42.894381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500042fe80
00:39:34.951 [2024-11-07 13:44:42.894399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:34.951 qpair failed and we were unable to recover it.
00:39:34.951 [2024-11-07 13:44:42.894723] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:39:34.951 A controller has encountered a failure and is being reset.
00:39:34.951 [2024-11-07 13:44:42.904611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.951 [2024-11-07 13:44:42.904756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.951 [2024-11-07 13:44:42.904851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.951 [2024-11-07 13:44:42.904907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.951 [2024-11-07 13:44:42.904938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500041ff80
00:39:34.951 [2024-11-07 13:44:42.905013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:39:34.951 qpair failed and we were unable to recover it.
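The keep-alive submission failure above is what finally triggers the reset; note that the last attempts target a different queue pair than the earlier storm (qpair id 1, tqpair=0x61500041ff80). A quick way to bound the retry window from a saved copy of this console output; the file name target_disconnect.log is an assumption:

    # print the first and last bracketed timestamps of the CONNECT retry storm
    grep -o '\[2024-11-07 [0-9:.]*\]' target_disconnect.log | sed -n '1p;$p'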
00:39:34.951 [2024-11-07 13:44:42.914538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:34.951 [2024-11-07 13:44:42.914659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:34.951 [2024-11-07 13:44:42.914711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:34.951 [2024-11-07 13:44:42.914738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:34.951 [2024-11-07 13:44:42.914760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500041ff80
00:39:34.951 [2024-11-07 13:44:42.914810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:39:34.951 qpair failed and we were unable to recover it.
00:39:34.951 Controller properly reset.
00:39:35.212 Initializing NVMe Controllers
00:39:35.212 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:35.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:35.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:39:35.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:39:35.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:39:35.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:39:35.212 Initialization complete. Launching workers.
00:39:35.212 Starting thread on core 1
00:39:35.212 Starting thread on core 2
00:39:35.212 Starting thread on core 3
00:39:35.212 Starting thread on core 0
00:39:35.212 13:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:39:35.212
00:39:35.212 real 0m11.588s
00:39:35.212 user 0m21.136s
00:39:35.212 sys 0m3.784s
00:39:35.212 13:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:39:35.212 13:44:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:35.212 ************************************
00:39:35.212 END TEST nvmf_target_disconnect_tc2
00:39:35.212 ************************************
00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
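For reference, the unload sequence that nvmfcleanup runs here, reconstructed as a sketch from the nvmf/common.sh xtrace above; the `&& break` and the absence of a sleep between iterations are assumptions, since the trace only shows the loop head and the two modprobe calls:

    set +e
    for i in {1..20}; do
        # removing nvme-tcp also drops nvme_fabrics and nvme_keyring, per the rmmod output below
        modprobe -v -r nvme-tcp && break
    done
    modprobe -v -r nvme-fabrics
    set -e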
00:39:35.212 rmmod nvme_tcp 00:39:35.212 rmmod nvme_fabrics 00:39:35.212 rmmod nvme_keyring 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 4148563 ']' 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 4148563 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 4148563 ']' 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 4148563 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4148563 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4148563' 00:39:35.212 killing process with pid 4148563 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 4148563 00:39:35.212 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 4148563 00:39:36.154 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:36.154 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:36.154 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:36.154 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:39:36.154 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:36.154 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:39:36.154 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:39:36.154 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:36.154 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:36.154 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.154 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:36.154 13:44:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.067 13:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:38.067 00:39:38.067 real 0m23.672s 00:39:38.067 user 0m50.647s 00:39:38.067 sys 0m11.047s 00:39:38.067 13:44:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:38.067 13:44:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:38.067 ************************************ 00:39:38.067 END TEST nvmf_target_disconnect 00:39:38.067 ************************************ 00:39:38.067 13:44:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:38.067 00:39:38.067 real 8m32.191s 00:39:38.067 user 18m33.802s 00:39:38.067 sys 2m36.940s 00:39:38.067 13:44:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:38.067 13:44:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.067 ************************************ 00:39:38.067 END TEST nvmf_host 00:39:38.067 ************************************ 00:39:38.067 13:44:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:39:38.067 13:44:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:39:38.067 13:44:45 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:39:38.067 13:44:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:38.067 13:44:45 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:38.067 13:44:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:38.067 ************************************ 00:39:38.067 START TEST nvmf_target_core_interrupt_mode 00:39:38.067 ************************************ 00:39:38.067 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:39:38.329 * Looking for test storage... 
00:39:38.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:38.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.329 --rc genhtml_branch_coverage=1 00:39:38.329 --rc genhtml_function_coverage=1 00:39:38.329 --rc genhtml_legend=1 00:39:38.329 --rc geninfo_all_blocks=1 00:39:38.329 --rc geninfo_unexecuted_blocks=1 00:39:38.329 00:39:38.329 ' 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:38.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.329 --rc genhtml_branch_coverage=1 00:39:38.329 --rc genhtml_function_coverage=1 00:39:38.329 --rc genhtml_legend=1 00:39:38.329 --rc geninfo_all_blocks=1 00:39:38.329 --rc geninfo_unexecuted_blocks=1 00:39:38.329 00:39:38.329 ' 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:38.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.329 --rc genhtml_branch_coverage=1 00:39:38.329 --rc genhtml_function_coverage=1 00:39:38.329 --rc genhtml_legend=1 00:39:38.329 --rc geninfo_all_blocks=1 00:39:38.329 --rc geninfo_unexecuted_blocks=1 00:39:38.329 00:39:38.329 ' 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:38.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.329 --rc genhtml_branch_coverage=1 00:39:38.329 --rc genhtml_function_coverage=1 00:39:38.329 --rc genhtml_legend=1 00:39:38.329 --rc geninfo_all_blocks=1 00:39:38.329 --rc geninfo_unexecuted_blocks=1 00:39:38.329 00:39:38.329 ' 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:38.329 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:38.330 ************************************ 00:39:38.330 START TEST nvmf_abort 00:39:38.330 ************************************ 00:39:38.330 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:39:38.593 * Looking for test storage... 00:39:38.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:38.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.593 --rc genhtml_branch_coverage=1 00:39:38.593 --rc genhtml_function_coverage=1 00:39:38.593 --rc genhtml_legend=1 00:39:38.593 --rc geninfo_all_blocks=1 00:39:38.593 --rc geninfo_unexecuted_blocks=1 00:39:38.593 00:39:38.593 ' 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:38.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.593 --rc genhtml_branch_coverage=1 00:39:38.593 --rc genhtml_function_coverage=1 00:39:38.593 --rc genhtml_legend=1 00:39:38.593 --rc geninfo_all_blocks=1 00:39:38.593 --rc geninfo_unexecuted_blocks=1 00:39:38.593 00:39:38.593 ' 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:38.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.593 --rc genhtml_branch_coverage=1 00:39:38.593 --rc genhtml_function_coverage=1 00:39:38.593 --rc genhtml_legend=1 00:39:38.593 --rc geninfo_all_blocks=1 00:39:38.593 --rc geninfo_unexecuted_blocks=1 00:39:38.593 00:39:38.593 ' 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:38.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.593 --rc genhtml_branch_coverage=1 00:39:38.593 --rc genhtml_function_coverage=1 00:39:38.593 --rc genhtml_legend=1 00:39:38.593 --rc geninfo_all_blocks=1 00:39:38.593 --rc geninfo_unexecuted_blocks=1 00:39:38.593 00:39:38.593 ' 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.593 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:38.594 13:44:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:39:38.594 13:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:46.736 13:44:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:46.736 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
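For reference, the device scan traced above keys supported NICs by their PCI "vendor:device" pair through SPDK's pci_bus_cache arrays (the E810 ports on this rig report as 0x8086:0x159b). A standalone sketch of the same discovery reading sysfs directly, with a deliberately abbreviated ID list drawn only from pairs visible in this trace:

    # Walk PCI devices, match vendor:device against a known-NIC list, and
    # report the netdev each matching port exposes, as common.sh does above.
    supported="0x8086:0x1592 0x8086:0x159b 0x8086:0x37d2 0x15b3:0x1017"
    for dev in /sys/bus/pci/devices/*; do
        ven=$(<"$dev/vendor") did=$(<"$dev/device")
        case " $supported " in
            *" $ven:$did "*)
                # A port bound to a kernel driver exposes its interface under net/
                for nd in "$dev"/net/*; do
                    [[ -e "$nd" ]] && echo "Found ${dev##*/} ($ven - $did): ${nd##*/}"
                done ;;
        esac
    done

On this rig that loop would print the same two "Found 0000:31:00.x" lines the trace shows, one per E810 port.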
00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:46.736 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:46.736 Found net devices under 0000:31:00.0: cvl_0_0 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:46.736 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:46.737 Found net devices under 0000:31:00.1: cvl_0_1 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:46.737 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:46.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:46.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:39:46.999 00:39:46.999 --- 10.0.0.2 ping statistics --- 00:39:46.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:46.999 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:46.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:46.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:39:46.999 00:39:46.999 --- 10.0.0.1 ping statistics --- 00:39:46.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:46.999 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=4154613 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4154613 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 4154613 ']' 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:46.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:46.999 13:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.999 [2024-11-07 13:44:54.943630] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:46.999 [2024-11-07 13:44:54.946266] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:39:46.999 [2024-11-07 13:44:54.946364] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:47.261 [2024-11-07 13:44:55.121288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:47.261 [2024-11-07 13:44:55.245945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:47.261 [2024-11-07 13:44:55.246013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:47.261 [2024-11-07 13:44:55.246030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:47.261 [2024-11-07 13:44:55.246047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:47.261 [2024-11-07 13:44:55.246060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:47.261 [2024-11-07 13:44:55.248740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:47.261 [2024-11-07 13:44:55.248874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:47.261 [2024-11-07 13:44:55.248910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:47.522 [2024-11-07 13:44:55.523983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:47.522 [2024-11-07 13:44:55.525221] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:47.522 [2024-11-07 13:44:55.525343] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
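The reactor banner above comes from nvmf_tgt being forked inside the cvl_0_0_ns_spdk namespace with the NVMF_APP argument array built earlier (-i <shm id> -e 0xFFFF --interrupt-mode), after which waitforlisten blocks until /var/tmp/spdk.sock answers. A condensed sketch of that launch-and-wait, assuming only the paths visible in the trace:

    # Start the target inside the test namespace, then poll for its RPC socket;
    # path-based UNIX sockets stay reachable across network namespaces.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'target exited early' >&2; exit 1; }
        [[ -S /var/tmp/spdk.sock ]] && break    # RPC server is listening
        sleep 0.1
    done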
00:39:47.522 [2024-11-07 13:44:55.525590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:47.818 [2024-11-07 13:44:55.778282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.818 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:48.136 Malloc0 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:48.136 Delay0 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
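The rpc_cmd sequence traced above is what makes the abort test meaningful: Delay0 wraps Malloc0 with roughly one second (1,000,000 us) of injected latency per operation, so submitted I/O lingers in the target long enough for abort commands to catch it. The same provisioning issued directly through rpc.py, using exactly the arguments shown in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256   # TCP transport, options as passed by abort.sh
    $rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB bdev, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # avg/p99 read+write latency, in us
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0

The listener add that follows in the trace (nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 4420) then makes cnode0 reachable from the initiator side of the namespace split.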
00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:48.136 [2024-11-07 13:44:55.906162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:48.136 13:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:39:48.136 [2024-11-07 13:44:56.111095] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:50.680 Initializing NVMe Controllers 00:39:50.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:50.680 controller IO queue size 128 less than required 00:39:50.680 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:39:50.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:39:50.680 Initialization complete. Launching workers. 
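The abort example just launched drives a queue-depth-128 I/O stream at cnode0 from lcore 0 while firing abort admin commands at the outstanding requests; with Delay0 holding every operation for about a second, nearly all of them are still in flight when their abort arrives. One consistent reading of the NS/CTRLR counters that follow, inferred from this output alone: I/Os reported as "failed" are the ones that completed with aborted status, "failed to submit" aborts are those squeezed out by the too-small queue the warning above flags, and the figures can be cross-checked arithmetically:

    # Figures copied verbatim from the summary below:
    io_completed=127       io_failed=27437
    abort_submitted=27498  abort_failed_submit=66
    abort_success=27437    abort_unsuccessful=61
    (( abort_success == io_failed )) \
        && echo 'every aborted I/O is reported as a failed I/O'
    (( abort_success + abort_unsuccessful <= abort_submitted )) \
        && echo 'abort outcomes are bounded by aborts submitted'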
00:39:50.680 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 27437 00:39:50.680 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27498, failed to submit 66 00:39:50.680 success 27437, unsuccessful 61, failed 0 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:50.680 rmmod nvme_tcp 00:39:50.680 rmmod nvme_fabrics 00:39:50.680 rmmod nvme_keyring 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4154613 ']' 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4154613 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 4154613 ']' 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 4154613 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4154613 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4154613' 00:39:50.680 killing process with pid 4154613 
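Teardown then walks everything back down: the subsystem is deleted over RPC, modprobe -v -r unloads nvme-tcp (pulling nvme_fabrics and nvme_keyring out with it, per the rmmod lines above), and killprocess, whose kill/wait steps continue just below, stops the target by pid. A minimal killprocess in the same shape, assuming nothing beyond standard shell built-ins:

    # Kill a background target by pid and reap it, tolerating an already-dead process.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do
        kill "$pid"
        wait "$pid" 2>/dev/null                  # reap; exit status is the target's
    }
    killprocess "$nvmfpid"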
00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 4154613 00:39:50.680 13:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 4154613 00:39:51.624 13:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:51.624 13:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:51.624 13:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:51.624 13:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:39:51.624 13:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:39:51.624 13:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:51.624 13:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:39:51.624 13:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:51.624 13:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:51.624 13:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:51.624 13:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:51.624 13:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.534 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:53.534 00:39:53.534 real 0m15.165s 00:39:53.534 user 0m12.754s 00:39:53.534 sys 0m7.679s 00:39:53.534 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:53.534 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:53.534 ************************************ 00:39:53.534 END TEST nvmf_abort 00:39:53.534 ************************************ 00:39:53.534 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:39:53.534 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:53.534 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:53.534 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:53.534 ************************************ 00:39:53.534 START TEST nvmf_ns_hotplug_stress 00:39:53.534 ************************************ 00:39:53.534 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:39:53.795 * Looking for test storage... 
00:39:53.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:53.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.795 --rc genhtml_branch_coverage=1 00:39:53.795 --rc genhtml_function_coverage=1 00:39:53.795 --rc genhtml_legend=1 00:39:53.795 --rc geninfo_all_blocks=1 00:39:53.795 --rc geninfo_unexecuted_blocks=1 00:39:53.795 00:39:53.795 ' 00:39:53.795 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:53.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.795 --rc genhtml_branch_coverage=1 00:39:53.796 --rc genhtml_function_coverage=1 00:39:53.796 --rc genhtml_legend=1 00:39:53.796 --rc geninfo_all_blocks=1 00:39:53.796 --rc geninfo_unexecuted_blocks=1 00:39:53.796 00:39:53.796 ' 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:53.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.796 --rc genhtml_branch_coverage=1 00:39:53.796 --rc genhtml_function_coverage=1 00:39:53.796 --rc genhtml_legend=1 00:39:53.796 --rc geninfo_all_blocks=1 00:39:53.796 --rc geninfo_unexecuted_blocks=1 00:39:53.796 00:39:53.796 ' 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:53.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.796 --rc genhtml_branch_coverage=1 00:39:53.796 --rc genhtml_function_coverage=1 
00:39:53.796 --rc genhtml_legend=1 00:39:53.796 --rc geninfo_all_blocks=1 00:39:53.796 --rc geninfo_unexecuted_blocks=1 00:39:53.796 00:39:53.796 ' 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
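The lcov probe traced a few lines back (scripts/common.sh's lt/cmp_versions walk deciding that lcov 1.15 predates 2) splits each version string on dots and dashes and compares fields numerically, left to right. A compact standalone rendering of that comparison, assuming purely numeric fields:

    # version_lt A B: succeed when version A sorts strictly before version B.
    version_lt() {
        local -a v1 v2; local i n
        IFS='.-' read -ra v1 <<< "$1"
        IFS='.-' read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earliest differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not strictly less-than
    }
    version_lt 1.15 2 && echo 'old lcov: use the legacy --rc option spellings'

Because 1.15 sorts before 2, the run above selects the "--rc lcov_branch_coverage=1" style options echoed into LCOV_OPTS.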
00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:39:53.796 13:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:01.930 13:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:01.930 13:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:01.930 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:01.930 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:01.930 
13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:01.930 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:01.931 Found net devices under 0000:31:00.0: cvl_0_0 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:01.931 Found net devices under 0000:31:00.1: cvl_0_1 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:01.931 13:45:09 
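
At this point nvmf/common.sh has matched the two Intel E810 ports (device ID 0x159b) and resolved their kernel interface names by listing each device's net/ directory in sysfs. Reduced to its core idea, the discovery step looks like the sketch below — a minimal reconstruction, assuming the two PCI addresses this run found; the real script also builds the per-vendor device-ID arrays and filters by driver and link state as traced above:

    #!/usr/bin/env bash
    shopt -s nullglob                      # empty glob -> empty array, not a literal '*'
    pci_devs=(0000:31:00.0 0000:31:00.1)   # the two E810 ports found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # A bound NIC exposes its interface name under /sys/.../net/.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep just the name, e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
    done
    printf 'Found net device: %s\n' "${net_devs[@]}"
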
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:01.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:01.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:40:01.931 00:40:01.931 --- 10.0.0.2 ping statistics --- 00:40:01.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:01.931 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:01.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:01.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:40:01.931 00:40:01.931 --- 10.0.0.1 ping statistics --- 00:40:01.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:01.931 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4160484 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4160484 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 4160484 ']' 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:01.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
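
For readability, the nvmf_tcp_init sequence traced above (common.sh lines 250-291) condenses to the following: the target-side port is moved into its own network namespace so initiator (10.0.0.1) and target (10.0.0.2) traffic really crosses the physical link between the two ports, the NVMe/TCP listener port is opened in the firewall, and both directions are ping-verified before the target app starts. A minimal sketch, using the interface names from this run; the real script wraps each step in error handling:

    #!/usr/bin/env bash
    set -e
    TGT_IF=cvl_0_0 INIT_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INIT_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INIT_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP listener port toward the initiator side.
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
    # Both directions must answer before the target is started.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
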
00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:01.931 13:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:02.191 [2024-11-07 13:45:09.979480] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:02.191 [2024-11-07 13:45:09.981789] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:40:02.191 [2024-11-07 13:45:09.981882] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:02.191 [2024-11-07 13:45:10.148377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:02.451 [2024-11-07 13:45:10.248571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:02.451 [2024-11-07 13:45:10.248615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:02.451 [2024-11-07 13:45:10.248630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:02.451 [2024-11-07 13:45:10.248640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:02.451 [2024-11-07 13:45:10.248651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:02.451 [2024-11-07 13:45:10.250688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:02.451 [2024-11-07 13:45:10.250804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:02.451 [2024-11-07 13:45:10.250829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:02.711 [2024-11-07 13:45:10.488882] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:02.711 [2024-11-07 13:45:10.490086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:02.711 [2024-11-07 13:45:10.490251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:02.711 [2024-11-07 13:45:10.490385] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:02.971 13:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:02.971 13:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:40:02.971 13:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:02.971 13:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:02.971 13:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:02.971 13:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:02.971 13:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:40:02.971 13:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:02.971 [2024-11-07 13:45:10.924013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:02.971 13:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:03.230 13:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:03.490 [2024-11-07 13:45:11.276927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:03.490 13:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:03.490 13:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:40:03.750 Malloc0 00:40:03.750 13:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:04.011 Delay0 00:40:04.011 13:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:04.272 13:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:40:04.272 NULL1 00:40:04.272 13:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
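
The target was launched inside the namespace as ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE, and the trace above then configures it over rpc.py: a TCP transport, subsystem cnode1 with listeners, a 32 MiB malloc bdev wrapped in a 1-second delay bdev, and a 1000 MiB null bdev, both attached as namespaces. Written out as a plain script (paths shortened to $rpc; these are the calls from ns_hotplug_stress.sh lines 27-36 as logged):

    #!/usr/bin/env bash
    rpc="spdk/scripts/rpc.py"              # shortened; the log uses the full workspace path

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0          # 32 MiB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1 s avg/p99 latency on reads and writes
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512               # 1000 MiB null bdev
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
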
00:40:04.532 13:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4161190 00:40:04.532 13:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:40:04.532 13:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:04.532 13:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:04.793 13:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:04.793 13:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:40:04.793 13:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:40:05.053 true 00:40:05.053 13:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:05.053 13:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:05.313 13:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:05.574 13:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:40:05.574 13:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:40:05.574 true 00:40:05.574 13:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:05.574 13:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:05.834 13:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:06.095 13:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:40:06.095 13:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:40:06.095 true 00:40:06.354 13:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
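
Everything that follows is the stress loop itself: spdk_nvme_perf runs for 30 seconds against the subsystem while namespace 1 is repeatedly detached and re-attached and NULL1 grows by 1 MiB per pass — which is why null_size counts up 1001, 1002, ... through the rest of the trace, each resize echoing "true". A sketch reconstructed from script lines 40-50 as logged (the kill -0 check at line 44 is read here as the loop guard; paths shortened):

    #!/usr/bin/env bash
    rpc="spdk/scripts/rpc.py"

    # Background I/O load: 30 s of 512 B random reads at queue depth 128.
    spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do          # loop while perf is still running
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"       # prints "true" on success
    done
    wait "$PERF_PID"
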
-- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:06.354 13:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:06.354 13:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:06.615 13:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:40:06.615 13:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:40:06.875 true 00:40:06.875 13:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:06.875 13:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:07.135 13:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:07.135 13:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:40:07.135 13:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:40:07.395 true 00:40:07.395 13:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:07.395 13:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:07.655 13:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:07.655 13:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:40:07.655 13:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:40:07.914 true 00:40:07.914 13:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:07.914 13:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:08.173 13:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:40:08.434 13:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:40:08.434 13:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:40:08.434 true 00:40:08.434 13:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:08.434 13:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:08.694 13:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:08.954 13:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:40:08.954 13:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:40:08.954 true 00:40:09.215 13:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:09.215 13:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:09.215 13:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:09.475 13:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:40:09.475 13:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:40:09.736 true 00:40:09.736 13:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:09.736 13:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:09.736 13:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:09.997 13:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:40:09.997 13:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:40:10.257 true 00:40:10.257 13:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 4161190 00:40:10.257 13:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:10.518 13:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:10.518 13:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:40:10.518 13:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:40:10.779 true 00:40:10.779 13:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:10.779 13:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:11.040 13:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:11.040 13:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:40:11.040 13:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:40:11.300 true 00:40:11.300 13:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:11.300 13:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:11.561 13:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:11.561 13:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:40:11.561 13:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:40:11.824 true 00:40:11.824 13:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:11.824 13:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:12.085 13:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:12.085 13:45:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:40:12.085 13:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:40:12.345 true 00:40:12.345 13:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:12.345 13:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:12.606 13:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:12.867 13:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:40:12.867 13:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:40:12.867 true 00:40:12.867 13:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:12.867 13:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:13.127 13:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:13.387 13:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:40:13.387 13:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:40:13.387 true 00:40:13.387 13:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:13.387 13:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:13.648 13:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:13.908 13:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:40:13.908 13:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:40:13.908 true 00:40:14.168 13:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:14.168 13:45:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:14.168 13:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:14.428 13:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:40:14.428 13:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:40:14.689 true 00:40:14.689 13:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:14.689 13:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:14.949 13:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:14.949 13:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:40:14.949 13:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:40:15.209 true 00:40:15.209 13:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:15.209 13:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:15.469 13:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:15.469 13:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:40:15.469 13:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:40:15.729 true 00:40:15.729 13:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:15.729 13:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:15.989 13:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:16.250 13:45:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:40:16.250 13:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:40:16.250 true 00:40:16.250 13:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:16.250 13:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:16.510 13:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:16.771 13:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:40:16.771 13:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:40:16.771 true 00:40:16.771 13:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:16.771 13:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:17.046 13:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:17.306 13:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:40:17.306 13:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:40:17.306 true 00:40:17.306 13:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:17.306 13:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:17.567 13:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:17.827 13:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:40:17.827 13:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:40:17.827 true 00:40:17.827 13:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:17.827 13:45:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:18.088 13:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:18.350 13:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:40:18.350 13:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:40:18.611 true 00:40:18.611 13:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:18.611 13:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:18.611 13:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:18.873 13:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:40:18.873 13:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:40:19.133 true 00:40:19.133 13:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:19.133 13:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:19.133 13:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:19.394 13:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:40:19.394 13:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:40:19.655 true 00:40:19.655 13:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:19.655 13:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:19.915 13:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:19.915 13:45:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:40:19.915 13:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:40:20.175 true 00:40:20.175 13:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:20.175 13:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:20.435 13:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:20.435 13:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:40:20.435 13:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:40:20.694 true 00:40:20.694 13:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:20.694 13:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:20.954 13:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:21.213 13:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:40:21.213 13:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:40:21.213 true 00:40:21.213 13:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:21.214 13:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:21.473 13:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:21.739 13:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:40:21.739 13:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:40:21.739 true 00:40:21.739 13:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:21.739 13:45:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:22.001 13:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:22.262 13:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:40:22.262 13:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:40:22.262 true 00:40:22.262 13:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:22.262 13:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:22.521 13:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:22.782 13:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:40:22.782 13:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:40:22.782 true 00:40:23.042 13:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:23.042 13:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:23.042 13:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:23.302 13:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:40:23.302 13:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:40:23.563 true 00:40:23.563 13:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:23.563 13:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:23.563 13:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:23.824 13:45:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:40:23.824 13:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:40:24.084 true 00:40:24.084 13:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:24.084 13:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:24.345 13:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:24.345 13:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:40:24.345 13:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:40:24.605 true 00:40:24.605 13:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:24.605 13:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:24.866 13:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:24.866 13:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:40:24.866 13:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:40:25.127 true 00:40:25.127 13:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:25.127 13:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:25.388 13:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:25.649 13:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:40:25.649 13:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:40:25.649 true 00:40:25.649 13:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:25.649 13:45:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:25.911 13:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:26.178 13:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:40:26.178 13:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:40:26.178 true 00:40:26.178 13:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:26.178 13:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:26.437 13:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:26.698 13:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:40:26.698 13:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:40:26.698 true 00:40:26.698 13:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:26.698 13:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:26.959 13:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:27.220 13:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:40:27.220 13:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:40:27.220 true 00:40:27.481 13:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:27.481 13:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:27.481 13:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:27.742 13:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:40:27.742 13:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:40:28.002 true 00:40:28.002 13:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:28.002 13:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:28.002 13:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:28.263 13:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:40:28.263 13:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:40:28.523 true 00:40:28.523 13:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:28.523 13:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:28.784 13:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:28.784 13:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:40:28.784 13:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:40:29.045 true 00:40:29.045 13:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:29.045 13:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:29.306 13:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:29.306 13:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:40:29.306 13:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:40:29.567 true 00:40:29.567 13:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:29.567 13:45:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:29.828 13:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:30.089 13:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:40:30.089 13:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:40:30.089 true 00:40:30.089 13:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:30.089 13:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:30.350 13:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:30.611 13:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:40:30.611 13:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:40:30.611 true 00:40:30.611 13:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:30.611 13:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:30.873 13:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:31.133 13:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:40:31.133 13:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:40:31.133 true 00:40:31.133 13:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:31.134 13:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:31.394 13:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:31.655 13:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:40:31.655 13:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:40:31.916 true 00:40:31.917 13:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:31.917 13:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:31.917 13:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:32.177 13:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:40:32.177 13:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:40:32.438 true 00:40:32.438 13:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:32.438 13:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:32.438 13:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:32.699 13:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:40:32.700 13:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:40:32.960 true 00:40:32.960 13:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:32.960 13:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:33.222 13:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:33.222 13:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:40:33.222 13:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:40:33.482 true 00:40:33.482 13:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190 00:40:33.482 13:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:40:33.743 13:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:40:33.743 13:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:40:33.743 13:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:40:34.004 true
00:40:34.004 13:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190
00:40:34.004 13:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:40:34.264 13:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:40:34.525 13:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:40:34.525 13:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:40:34.525 true
00:40:34.525 13:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190
00:40:34.525 13:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
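For readability, here is a minimal sketch of the loop that the sh@44-50 xtrace records above correspond to, reassembled from the echoed commands (a reconstruction, not the verbatim upstream script; rpc, perf_pid and the starting null_size are stand-ins inferred from the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  while kill -0 "$perf_pid"; do                                        # line 44: loop while the I/O generator (pid 4161190 here) is alive
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45: hot-remove NSID 1 under load
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46: re-attach the Delay0 bdev as a namespace
      null_size=$((null_size + 1))                                     # line 49: 1035..1055 in this excerpt; starting value not shown here
      "$rpc" bdev_null_resize NULL1 "$null_size"                       # line 50: grow the NULL1 bdev while it is exported
  done

The bare "true" lines interleaved in the trace are the RPC responses to bdev_null_resize, and the loop terminates exactly when kill -0 first fails, which is the "No such process" message further below.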
00:40:34.785 Initializing NVMe Controllers
00:40:34.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:40:34.785 Controller IO queue size 128, less than required.
00:40:34.785 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:40:34.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:40:34.785 Initialization complete. Launching workers.
00:40:34.785 ========================================================
00:40:34.785                                                                                                Latency(us)
00:40:34.786 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:40:34.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   26695.41      13.03    4794.57    1697.74   11975.80
00:40:34.786 ========================================================
00:40:34.786 Total                                                                    :   26695.41      13.03    4794.57    1697.74   11975.80
00:40:34.786
00:40:34.786 13:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:40:35.046 13:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:40:35.046 13:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:40:35.046 true
00:40:35.046 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4161190
00:40:35.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4161190) - No such process
00:40:35.046 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4161190
00:40:35.046 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:40:35.306 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:40:35.586 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:40:35.586 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:40:35.586 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:40:35.586 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:40:35.586 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:40:35.586 null0
00:40:35.586 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:40:35.586 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:40:35.586 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:40:35.903 null1
00:40:35.904 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:40:35.904 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
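Stepping back to the perf summary above for a moment: the numbers are internally consistent. By Little's law (outstanding I/Os = IOPS x average latency), 26695.41 IO/s x 4794.57 us comes to roughly 128 I/Os in flight, which is exactly the "Controller IO queue size 128" limit reported at initialization; the run was pinned at the controller queue size, which is what the "Consider using lower queue depth or smaller IO size" hint is warning about.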
00:40:35.904 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:40:36.172 null2 00:40:36.172 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:36.172 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:36.172 13:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:40:36.172 null3 00:40:36.172 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:36.172 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:36.172 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:40:36.433 null4 00:40:36.433 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:36.433 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:36.433 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:40:36.433 null5 00:40:36.693 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:36.693 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:36.693 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:40:36.693 null6 00:40:36.693 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:36.694 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:36.694 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:40:36.955 null7 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
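The sh@58-66 records around this point set up the multi-threaded phase: eight null bdevs (null0-null7) are created and then eight background add_remove workers are started, one per namespace ID, with their pids collected for the wait traced at sh@66. Reassembled from the echoed lines, the shape is roughly (a sketch using the same stand-in rpc variable as above, not the verbatim script):

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      "$rpc" bdev_null_create "null$i" 100 4096    # lines 59-60: 100 MB null bdev with a 4096-byte block size
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove "$((i + 1))" "null$i" &           # lines 62-63: one background worker per (NSID, bdev) pair
      pids+=($!)                                   # line 64: remember each worker's pid
  done
  wait "${pids[@]}"                                # line 66: the "wait 4167279 4167280 ..." seen below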
00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
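The interleaved sh@14-18 records running through this stretch are eight concurrent instances of the add_remove helper, whose body can be read straight off the xtrace (loop bound of 10 taken from the echoed "(( i < 10 ))"; again a reconstruction, not the verbatim script):

  add_remove() {
      local nsid=$1 bdev=$2                                                           # line 14: e.g. nsid=8 bdev=null7
      local i
      for ((i = 0; i < 10; i++)); do                                                  # line 16
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # line 17
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # line 18
      done
  }

Because all eight workers hammer the same subsystem concurrently, their add/remove lines land in the log in arbitrary order, which is why the namespace IDs in the surrounding trace appear shuffled rather than sequential.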
00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4167279 4167280 4167282 4167284 4167286 4167288 4167290 4167292 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:36.955 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:36.956 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:36.956 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:36.956 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:36.956 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:37.216 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:37.216 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:37.216 13:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:37.216 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:37.216 13:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.216 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.216 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:37.216 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.216 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.216 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.216 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:37.216 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.216 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:37.216 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.216 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.216 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:37.217 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.217 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.217 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:37.217 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.217 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.217 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:37.217 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.217 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.217 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:37.217 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.217 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.217 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.478 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:37.740 13:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:37.740 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.002 13:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.002 13:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:38.263 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:38.263 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:38.263 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:38.263 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:38.263 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:38.263 
13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:38.263 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:38.263 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:38.263 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.264 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.264 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:38.264 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.264 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.264 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:38.264 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.264 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.264 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:38.524 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.524 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.524 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:38.524 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.524 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:38.525 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:38.785 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:38.785 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.786 13:45:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:38.786 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.047 13:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:39.047 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.047 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.047 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:39.309 
13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:39.309 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
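The churn above is driven by ns_hotplug_stress.sh: the xtrace shows a counting loop (script line 16) guarding every nvmf_subsystem_add_ns call (line 17), with nvmf_subsystem_remove_ns calls (line 18) landing interleaved, so the adds and removes evidently run concurrently. A condensed sketch of that shape follows; the loop structure is inferred from the trace rather than copied from the script, and the shuffled nsid order is an assumption. The rpc.py path and the nsid-to-bdev pairing are taken from the log.

    # Inferred sketch of the hotplug stress loop, not the script body.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn="nqn.2016-06.io.spdk:cnode1"
    add_loop() {
        for ((i = 0; i < 10; ++i)); do          # (( i < 10 )) bound from the trace
            for n in $(shuf -e 1 2 3 4 5 6 7 8); do
                # nsid n is always paired with bdev null$((n-1)) in the log
                "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
            done
        done
    }
    remove_loop() {
        for ((i = 0; i < 10; ++i)); do
            for n in $(shuf -e 1 2 3 4 5 6 7 8); do
                "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
            done
        done
    }
    add_loop & remove_loop &                    # concurrency inferred from interleaving
    wait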
00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.570 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
8 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:39.831 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.832 13:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:39.832 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:40.092 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.092 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.092 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:40.092 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:40.092 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:40.092 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.092 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.092 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:40.092 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:40.092 13:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:40.092 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:40.092 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:40.092 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.092 
13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.092 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:40.092 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.353 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.354 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:40.354 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.354 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:40.354 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:40.354 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:40.354 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:40.354 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.354 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.354 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:40.614 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.615 13:45:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:40.615 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:40.615 rmmod nvme_tcp 00:40:40.875 rmmod nvme_fabrics 00:40:40.875 rmmod nvme_keyring 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4160484 ']' 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4160484 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 4160484 ']' 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 4160484 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4160484 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
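The teardown that runs here follows a fixed shape: drop the EXIT trap, unload the kernel NVMe/TCP initiator modules (the rmmod lines above are modprobe's verbose output), then stop the nvmf target (pid 4160484, comm reactor_1) only after confirming it is alive and not a sudo wrapper. A condensed sketch, with helper names and the {1..20} retry taken from the trace; the bodies are paraphrased and the sudo-wrapper branch is elided.

    # Paraphrased sketch of nvmfcleanup/killprocess from the trace above.
    nvmfcleanup() {
        sync
        set +e
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break    # logs: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
            sleep 1                             # retry pacing is an assumption
        done
        modprobe -v -r nvme-fabrics
        set -e
    }
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid"                          # still alive?
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name == sudo ]] && return       # sudo-wrapper branch elided
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }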
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4160484' 00:40:40.875 killing process with pid 4160484 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 4160484 00:40:40.875 13:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 4160484 00:40:41.445 13:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:41.445 13:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:41.445 13:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:41.445 13:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:40:41.445 13:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:40:41.445 13:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:41.445 13:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:40:41.445 13:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:41.445 13:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:41.445 13:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:41.445 13:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:41.445 13:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:43.988 00:40:43.988 real 0m50.010s 00:40:43.988 user 3m4.472s 00:40:43.988 sys 0m22.169s 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:43.988 ************************************ 00:40:43.988 END TEST nvmf_ns_hotplug_stress 00:40:43.988 ************************************ 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:43.988 ************************************ 00:40:43.988 START TEST nvmf_delete_subsystem 00:40:43.988 
************************************ 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:40:43.988 * Looking for test storage... 00:40:43.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:43.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.988 --rc genhtml_branch_coverage=1 00:40:43.988 --rc genhtml_function_coverage=1 00:40:43.988 --rc genhtml_legend=1 00:40:43.988 --rc geninfo_all_blocks=1 00:40:43.988 --rc geninfo_unexecuted_blocks=1 00:40:43.988 00:40:43.988 ' 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:43.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.988 --rc genhtml_branch_coverage=1 00:40:43.988 --rc genhtml_function_coverage=1 00:40:43.988 --rc genhtml_legend=1 00:40:43.988 --rc geninfo_all_blocks=1 00:40:43.988 --rc geninfo_unexecuted_blocks=1 00:40:43.988 00:40:43.988 ' 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:43.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.988 --rc genhtml_branch_coverage=1 00:40:43.988 --rc genhtml_function_coverage=1 00:40:43.988 --rc genhtml_legend=1 00:40:43.988 --rc geninfo_all_blocks=1 00:40:43.988 --rc geninfo_unexecuted_blocks=1 00:40:43.988 00:40:43.988 ' 00:40:43.988 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:43.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:43.988 --rc genhtml_branch_coverage=1 00:40:43.988 --rc genhtml_function_coverage=1 00:40:43.988 --rc 
genhtml_legend=1 00:40:43.988 --rc geninfo_all_blocks=1 00:40:43.989 --rc geninfo_unexecuted_blocks=1 00:40:43.989 00:40:43.989 ' 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:43.989 13:45:51 
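The lt/cmp_versions exchange above gates the coverage options on the installed lcov (1.15 in this run) being older than 2: the version string is split on ".", "-", and ":" and compared component by component. A condensed, runnable rendering of that compare, paraphrased from the xtrace rather than copied from scripts/common.sh:

    # Paraphrased sketch of the version compare traced above.
    cmp_versions() {                            # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            if ((a > b)); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if ((a < b)); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == '=' || $op == '<=' || $op == '>=' ]]  # all components equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2"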
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:40:43.989 13:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:52.128 13:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:52.128 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:52.129 13:45:59 
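The array setup above buckets supported NICs by PCI vendor:device ID, and because this job runs with SPDK_TEST_NVMF_NICS=e810 it then keeps only the e810 entries. The IDs below are copied from the trace; the pci_bus_cache lookup is paraphrased as literal strings.

    # NIC classification as traced above (IDs verbatim from the log).
    intel=0x8086 mellanox=0x15b3
    e810=("$intel:0x1592" "$intel:0x159b")
    x722=("$intel:0x37d2")
    mlx=("$mellanox:0xa2dc" "$mellanox:0x1021" "$mellanox:0xa2d6"
         "$mellanox:0x101d" "$mellanox:0x101b" "$mellanox:0x1017"
         "$mellanox:0x1019" "$mellanox:0x1015" "$mellanox:0x1013")
    pci_devs=("${e810[@]}")            # e810-only run, per the trace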
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:52.129 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:52.129 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:52.129 13:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:52.129 Found net devices under 0000:31:00.0: cvl_0_0 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:52.129 Found net devices under 0000:31:00.1: cvl_0_1 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:52.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:52.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:40:52.129 00:40:52.129 --- 10.0.0.2 ping statistics --- 00:40:52.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:52.129 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:52.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:52.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:40:52.129 00:40:52.129 --- 10.0.0.1 ping statistics --- 00:40:52.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:52.129 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:52.129 13:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:52.129 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:40:52.129 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:52.129 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:52.129 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:52.129 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4172875 00:40:52.129 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4172875 00:40:52.129 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:52.129 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 4172875 ']' 00:40:52.129 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:52.130 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:52.130 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:52.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
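
For readers following the trace: the nvmf_tcp_init block above boils down to a handful of iproute2/iptables commands. A minimal standalone sketch, using the interface names and addresses this run happened to pick (cvl_0_0/cvl_0_1, 10.0.0.0/24) and a simplified iptables comment tag; both will differ on other hosts:

  sudo ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one E810 port into it
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP/4420 for NVMe-oF; the comment tag lets cleanup strip the rule back out later
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                      # initiator -> target
  sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The two pings are the same sanity check the harness performs above before it modprobes nvme-tcp.
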
00:40:52.130 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:52.130 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:52.130 [2024-11-07 13:46:00.114131] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:52.130 [2024-11-07 13:46:00.116448] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:40:52.130 [2024-11-07 13:46:00.116529] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:52.389 [2024-11-07 13:46:00.276129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:52.389 [2024-11-07 13:46:00.373848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:52.389 [2024-11-07 13:46:00.373898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:52.389 [2024-11-07 13:46:00.373915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:52.389 [2024-11-07 13:46:00.373925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:52.389 [2024-11-07 13:46:00.373937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:52.389 [2024-11-07 13:46:00.375779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:52.389 [2024-11-07 13:46:00.375805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:52.649 [2024-11-07 13:46:00.613400] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:52.649 [2024-11-07 13:46:00.613607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:52.649 [2024-11-07 13:46:00.613710] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
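
nvmfappstart and waitforlisten, traced above, amount to launching nvmf_tgt inside that namespace and polling its RPC socket until it answers. A rough sketch of the same sequence; the fixed retry loop is a simplification of autotest_common.sh's waitforlisten, and rpc_get_methods is used here only as a cheap probe RPC:

  sudo ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # give up early if the target died before opening /var/tmp/spdk.sock
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done
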
00:40:52.909 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:52.909 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:40:52.909 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:52.909 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:52.909 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:52.909 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:52.909 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:52.909 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:52.909 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:52.909 [2024-11-07 13:46:00.912929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:53.169 [2024-11-07 13:46:00.944841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:53.169 NULL1 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.169 13:46:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:53.169 Delay0 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4173099 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:40:53.169 13:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:53.169 [2024-11-07 13:46:01.079128] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
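
Condensed, the target configuration that rpc_cmd drives in delete_subsystem.sh lines 15-26 is the short RPC sequence below (arguments copied from the trace; the comments are interpretation). The bdev_delay_create latencies are in microseconds, i.e. one full second per I/O, which keeps a queue of 128 commands in flight so that deleting the subsystem races against live I/O and produces the completion-error storm further down:

  rpc=./scripts/rpc.py                            # talks to /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192    # NVMF_TRANSPORT_OPTS from the trace above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512            # 1000 MiB null bdev, 512 B blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
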
00:40:55.079 13:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:55.079 13:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.079 13:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 starting I/O failed: -6 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 starting I/O failed: -6 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 starting I/O failed: -6 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 starting I/O failed: -6 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 starting I/O failed: -6 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 starting I/O failed: -6 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 starting I/O failed: -6 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 starting I/O failed: -6 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 starting I/O failed: -6 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 starting I/O failed: -6 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 starting I/O failed: -6 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 starting I/O failed: -6 00:40:55.340 [2024-11-07 13:46:03.300020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027180 is same with the state(6) to be set 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error 
(sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Write completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.340 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 [2024-11-07 13:46:03.300530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026780 is same with the state(6) to be set 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O 
failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 
00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Write completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.341 starting I/O failed: -6 00:40:55.341 Read completed with error (sct=0, sc=8) 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:55.342 starting I/O failed: -6 00:40:56.282 [2024-11-07 13:46:04.281894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025d80 is same with the state(6) to be set 00:40:56.542 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read 
completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 [2024-11-07 13:46:04.303503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026c80 is same with the state(6) to be set 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 [2024-11-07 13:46:04.304393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027680 is same with the state(6) to be set 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed 
with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 [2024-11-07 13:46:04.305869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030500 is same with the state(6) to be set 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 Write completed with error (sct=0, sc=8) 00:40:56.543 Read completed with error (sct=0, sc=8) 00:40:56.543 [2024-11-07 13:46:04.308181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030f00 is same with the state(6) to be set 00:40:56.543 Initializing NVMe Controllers 00:40:56.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:56.543 Controller IO queue size 128, less than required. 00:40:56.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:40:56.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:56.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:56.543 Initialization complete. Launching workers. 00:40:56.543 ======================================================== 00:40:56.543 Latency(us) 00:40:56.543 Device Information : IOPS MiB/s Average min max 00:40:56.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.69 0.08 893003.48 524.35 1008642.83 00:40:56.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 179.65 0.09 926607.52 501.86 1010421.37 00:40:56.543 ======================================================== 00:40:56.543 Total : 350.34 0.17 910235.10 501.86 1010421.37 00:40:56.543 00:40:56.543 [2024-11-07 13:46:04.309319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000025d80 (9): Bad file descriptor 00:40:56.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:40:56.543 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.543 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:40:56.543 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4173099 00:40:56.543 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4173099 00:40:57.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4173099) - No such process 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4173099 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 4173099 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 4173099 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:57.114 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:57.115 [2024-11-07 13:46:04.840817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4173763 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4173763 00:40:57.115 13:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:57.115 [2024-11-07 13:46:04.951507] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
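
After re-creating the subsystem, this second pass launches a 3-second perf run and then simply waits for it to drain. The polling that produces the kill -0 lines below (delete_subsystem.sh lines 56-60) is roughly:

  perf_pid=$!                      # spdk_nvme_perf backgrounded, as above
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && { echo 'perf never finished' >&2; exit 1; }   # ~10 s budget
      sleep 0.5
  done
  wait "$perf_pid"                 # reap it and collect the exit status

Note that the shell's "kill: (pid) - No such process" message, when it appears below, signals that the run completed; it is success, not failure.
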
00:40:57.375 13:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:57.375 13:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4173763 00:40:57.375 13:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:57.946 13:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:57.946 13:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4173763 00:40:57.946 13:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:58.518 13:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:58.518 13:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4173763 00:40:58.518 13:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:59.088 13:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:59.088 13:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4173763 00:40:59.088 13:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:59.660 13:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:59.660 13:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4173763 00:40:59.660 13:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:59.921 13:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:59.921 13:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4173763 00:40:59.921 13:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:00.181 Initializing NVMe Controllers 00:41:00.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:00.181 Controller IO queue size 128, less than required. 00:41:00.181 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:00.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:00.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:00.182 Initialization complete. Launching workers. 
00:41:00.182 ======================================================== 00:41:00.182 Latency(us) 00:41:00.182 Device Information : IOPS MiB/s Average min max 00:41:00.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002967.66 1000200.73 1007376.95 00:41:00.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004850.05 1000848.58 1043955.94 00:41:00.182 ======================================================== 00:41:00.182 Total : 256.00 0.12 1003908.86 1000200.73 1043955.94 00:41:00.182 00:41:00.442 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:00.442 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4173763 00:41:00.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4173763) - No such process 00:41:00.442 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4173763 00:41:00.442 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:41:00.442 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:41:00.442 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:00.442 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:41:00.443 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:00.443 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:41:00.443 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:00.443 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:00.443 rmmod nvme_tcp 00:41:00.443 rmmod nvme_fabrics 00:41:00.443 rmmod nvme_keyring 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4172875 ']' 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4172875 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 4172875 ']' 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 4172875 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4172875 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4172875' 00:41:00.703 killing process with pid 4172875 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 4172875 00:41:00.703 13:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 4172875 00:41:01.645 13:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:01.645 13:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:01.645 13:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:01.645 13:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:41:01.645 13:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:41:01.645 13:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:01.645 13:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:41:01.645 13:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:01.645 13:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:01.645 13:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:01.645 13:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:01.645 13:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:03.554 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:03.554 00:41:03.554 real 0m19.878s 00:41:03.554 user 0m27.987s 00:41:03.554 sys 0m8.066s 00:41:03.554 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:03.554 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:03.554 ************************************ 00:41:03.554 END TEST nvmf_delete_subsystem 00:41:03.554 ************************************ 00:41:03.555 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:41:03.555 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:41:03.555 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:41:03.555 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:03.555 ************************************ 00:41:03.555 START TEST nvmf_host_management 00:41:03.555 ************************************ 00:41:03.555 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:41:03.555 * Looking for test storage... 00:41:03.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:03.555 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:03.555 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:41:03.555 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:03.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.816 --rc genhtml_branch_coverage=1 00:41:03.816 --rc genhtml_function_coverage=1 00:41:03.816 --rc genhtml_legend=1 00:41:03.816 --rc geninfo_all_blocks=1 00:41:03.816 --rc geninfo_unexecuted_blocks=1 00:41:03.816 00:41:03.816 ' 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:03.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.816 --rc genhtml_branch_coverage=1 00:41:03.816 --rc genhtml_function_coverage=1 00:41:03.816 --rc genhtml_legend=1 00:41:03.816 --rc geninfo_all_blocks=1 00:41:03.816 --rc geninfo_unexecuted_blocks=1 00:41:03.816 00:41:03.816 ' 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:03.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.816 --rc genhtml_branch_coverage=1 00:41:03.816 --rc genhtml_function_coverage=1 00:41:03.816 --rc genhtml_legend=1 00:41:03.816 --rc geninfo_all_blocks=1 00:41:03.816 --rc geninfo_unexecuted_blocks=1 00:41:03.816 00:41:03.816 ' 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:03.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.816 --rc genhtml_branch_coverage=1 00:41:03.816 --rc genhtml_function_coverage=1 00:41:03.816 --rc genhtml_legend=1 
00:41:03.816 --rc geninfo_all_blocks=1 00:41:03.816 --rc geninfo_unexecuted_blocks=1 00:41:03.816 00:41:03.816 ' 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:03.816 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:03.817 13:46:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:41:03.817 13:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:11.961 13:46:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:11.961 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:11.961 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
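The records above come from gather_supported_nvmf_pci_devs: per-family ID tables (e810, x722, mlx) are keyed by PCI vendor:device pairs, each matching function is checked for an operational link, and its kernel net devices are read out of sysfs, which is where the two "Found net devices under 0000:31:00.x" results just below come from. A minimal stand-alone sketch of the same sysfs walk, assuming the Intel E810 ID 0x8086:0x159b seen in the trace; this is an illustration, not the nvmf/common.sh code:

for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")              # e.g. 0x8086 for Intel
    device=$(<"$pci/device")              # e.g. 0x159b for an E810 port
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    # A bound driver exposes its netdev names under the function's net/ dir
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done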
00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:11.961 Found net devices under 0000:31:00.0: cvl_0_0 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:11.961 Found net devices under 0000:31:00.1: cvl_0_1 00:41:11.961 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:11.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:11.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:41:11.962 00:41:11.962 --- 10.0.0.2 ping statistics --- 00:41:11.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:11.962 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:11.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:11.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:41:11.962 00:41:11.962 --- 10.0.0.1 ping statistics --- 00:41:11.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:11.962 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4179127 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4179127 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 4179127 ']' 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:11.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:11.962 13:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:12.223 [2024-11-07 13:46:19.986215] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:12.223 [2024-11-07 13:46:19.988883] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:41:12.223 [2024-11-07 13:46:19.988985] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:12.223 [2024-11-07 13:46:20.176869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:12.484 [2024-11-07 13:46:20.303937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:12.484 [2024-11-07 13:46:20.304005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:12.484 [2024-11-07 13:46:20.304021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:12.484 [2024-11-07 13:46:20.304032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:12.484 [2024-11-07 13:46:20.304044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:12.484 [2024-11-07 13:46:20.306893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:12.484 [2024-11-07 13:46:20.307088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:12.484 [2024-11-07 13:46:20.307235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:12.484 [2024-11-07 13:46:20.307264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:12.745 [2024-11-07 13:46:20.582106] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:12.745 [2024-11-07 13:46:20.596603] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:12.745 [2024-11-07 13:46:20.597058] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:12.745 [2024-11-07 13:46:20.597167] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:12.745 [2024-11-07 13:46:20.597397] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
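At this point nvmf_tcp_init has finished the data-path plumbing: the target port cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, the initiator kept cvl_0_1 as 10.0.0.1/24, an iptables exception was opened for the NVMe/TCP listener on port 4420, and both directions were verified with the pings above. Replayed as bare commands (root assumed; error handling and the SPDK_NVMF comment tag omitted):

ip netns add cvl_0_0_ns_spdk                       # target runs in here
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port in
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, host ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator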
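The nvmfappstart step traced here then launches nvmf_tgt inside that namespace with -m 0x1E (cores 1 through 4, matching the four reactor notices above) and waits for its RPC socket. A simplified equivalent of that start-and-wait step; the polling loop is a stand-in for autotest_common.sh's waitforlisten, using the framework_wait_init RPC that also appears later in this run:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
# Block until the app answers RPCs on its UNIX socket (simplified waitforlisten)
until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done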
00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:13.006 [2024-11-07 13:46:20.792543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:13.006 Malloc0 00:41:13.006 [2024-11-07 13:46:20.924344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4179447 00:41:13.006 13:46:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4179447 /var/tmp/bdevperf.sock 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 4179447 ']' 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:13.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:13.006 { 00:41:13.006 "params": { 00:41:13.006 "name": "Nvme$subsystem", 00:41:13.006 "trtype": "$TEST_TRANSPORT", 00:41:13.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:13.006 "adrfam": "ipv4", 00:41:13.006 "trsvcid": "$NVMF_PORT", 00:41:13.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:13.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:13.006 "hdgst": ${hdgst:-false}, 00:41:13.006 "ddgst": ${ddgst:-false} 00:41:13.006 }, 00:41:13.006 "method": "bdev_nvme_attach_controller" 00:41:13.006 } 00:41:13.006 EOF 00:41:13.006 )") 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
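gen_nvmf_target_json, traced above, renders one bdev_nvme_attach_controller stanza per subsystem from a heredoc, joins the stanzas with IFS=',', and runs the result through jq before bdevperf reads it via --json /dev/fd/63. A cut-down sketch of that templating for a single subsystem; gen_attach_stanza is a hypothetical name, the values are copied from the printf output just below, and the enclosing config wrapper is omitted:

gen_attach_stanza() {
    local subsystem=${1:-0}    # substituted into every name/NQN below
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_attach_stanza 0 | jq .    # validate and compact, as the jq step above does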
00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:41:13.006 13:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:13.006 "params": { 00:41:13.006 "name": "Nvme0", 00:41:13.006 "trtype": "tcp", 00:41:13.006 "traddr": "10.0.0.2", 00:41:13.006 "adrfam": "ipv4", 00:41:13.006 "trsvcid": "4420", 00:41:13.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:13.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:13.006 "hdgst": false, 00:41:13.006 "ddgst": false 00:41:13.006 }, 00:41:13.006 "method": "bdev_nvme_attach_controller" 00:41:13.006 }' 00:41:13.266 [2024-11-07 13:46:21.063659] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:41:13.267 [2024-11-07 13:46:21.063766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4179447 ] 00:41:13.267 [2024-11-07 13:46:21.203770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:13.527 [2024-11-07 13:46:21.299744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:13.787 Running I/O for 10 seconds... 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=131 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']' 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.050 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:14.050 [2024-11-07 13:46:21.916058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:14.050 [2024-11-07 13:46:21.916103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:14.050 [2024-11-07 13:46:21.916450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:14.050 [2024-11-07 13:46:21.916494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:14.050 [2024-11-07 13:46:21.916534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:14.050 [2024-11-07 13:46:21.916547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:14.050 [2024-11-07 13:46:21.916561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:14.051 [2024-11-07 13:46:21.916572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:14.051 [2024-11-07 13:46:21.916585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:14.051 [2024-11-07 13:46:21.916596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:14.051 [2024-11-07 13:46:21.916610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:14.051 [2024-11-07 13:46:21.916621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:41:14.051 [... the same command/completion notice pair repeats for the rest of the deleted submission queue: WRITE sqid:1 cid:45-63 (lba 30336-32640, len:128 each) and READ sqid:1 cid:0-39 (lba 24576-29568, len:128 each), every command printed by nvme_qpair.c:243 and every completion ABORTED - SQ DELETION (00/08) ...]
00:41:14.052 [2024-11-07 13:46:21.918073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000417b00 is same with the
state(6) to be set 00:41:14.052 [2024-11-07 13:46:21.918337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:14.052 [2024-11-07 13:46:21.918356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:14.052 [2024-11-07 13:46:21.918370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:14.052 [2024-11-07 13:46:21.918380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:14.052 [2024-11-07 13:46:21.918392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:14.052 [2024-11-07 13:46:21.918406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:14.052 [2024-11-07 13:46:21.918418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:14.052 [2024-11-07 13:46:21.918428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:14.052 [2024-11-07 13:46:21.918438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000416c00 is same with the state(6) to be set 00:41:14.052 [2024-11-07 13:46:21.919709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:41:14.052 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.052 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:41:14.052 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:14.052 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:14.052 task offset: 29696 on job bdev=Nvme0n1 fails 00:41:14.052 00:41:14.052 Latency(us) 00:41:14.052 [2024-11-07T12:46:22.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:14.052 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:41:14.052 Job: Nvme0n1 ended in about 0.17 seconds with error 00:41:14.052 Verification LBA range: start 0x0 length 0x400 00:41:14.052 Nvme0n1 : 0.17 1097.85 68.62 365.95 0.00 40989.33 2621.44 38229.33 00:41:14.052 [2024-11-07T12:46:22.059Z] =================================================================================================================== 00:41:14.052 [2024-11-07T12:46:22.059Z] Total : 1097.85 68.62 365.95 0.00 40989.33 2621.44 38229.33 00:41:14.052 [2024-11-07 13:46:21.924010] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:14.052 [2024-11-07 13:46:21.924044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000416c00 (9): Bad file descriptor 00:41:14.052 13:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:14.052 13:46:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:41:14.052 [2024-11-07 13:46:22.016057] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4179447 00:41:14.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4179447) - No such process 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:14.994 { 00:41:14.994 "params": { 00:41:14.994 "name": "Nvme$subsystem", 00:41:14.994 "trtype": "$TEST_TRANSPORT", 00:41:14.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:14.994 "adrfam": "ipv4", 00:41:14.994 "trsvcid": "$NVMF_PORT", 00:41:14.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:14.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:14.994 "hdgst": ${hdgst:-false}, 00:41:14.994 "ddgst": ${ddgst:-false} 00:41:14.994 }, 00:41:14.994 "method": "bdev_nvme_attach_controller" 00:41:14.994 } 00:41:14.994 EOF 00:41:14.994 )") 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:41:14.994 13:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:14.994 "params": { 00:41:14.994 "name": "Nvme0", 00:41:14.994 "trtype": "tcp", 00:41:14.994 "traddr": "10.0.0.2", 00:41:14.994 "adrfam": "ipv4", 00:41:14.994 "trsvcid": "4420", 00:41:14.994 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:14.994 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:14.994 "hdgst": false, 00:41:14.994 "ddgst": false 00:41:14.994 }, 00:41:14.994 "method": "bdev_nvme_attach_controller" 00:41:14.994 }' 00:41:15.254 [2024-11-07 13:46:23.024551] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
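
The gen_nvmf_target_json trace above builds one JSON fragment per subsystem from a heredoc template, joins the fragments with IFS=',', and hands the result to bdevperf via --json /dev/fd/62; the outer wrapper that jq formats around the joined fragments is not shown in this excerpt. A minimal stand-alone sketch of the templating pattern, using the values printed in this run (the hdgst/ddgst defaults come from the template's ${hdgst:-false}/${ddgst:-false} expansions):

    # Sketch of the per-subsystem templating seen in gen_nvmf_target_json
    subsystem=0 TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
    config=()
    config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    )")
    IFS=,
    printf '%s\n' "${config[*]}"   # with one subsystem this is the JSON printed above
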
00:41:15.254 [2024-11-07 13:46:23.024658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4179793 ] 00:41:15.254 [2024-11-07 13:46:23.161295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.254 [2024-11-07 13:46:23.257537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.825 Running I/O for 1 seconds... 00:41:16.764 1472.00 IOPS, 92.00 MiB/s 00:41:16.764 Latency(us) 00:41:16.764 [2024-11-07T12:46:24.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:16.764 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:41:16.764 Verification LBA range: start 0x0 length 0x400 00:41:16.764 Nvme0n1 : 1.02 1508.01 94.25 0.00 0.00 41694.32 6853.97 36263.25 00:41:16.764 [2024-11-07T12:46:24.771Z] =================================================================================================================== 00:41:16.764 [2024-11-07T12:46:24.771Z] Total : 1508.01 94.25 0.00 0.00 41694.32 6853.97 36263.25 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:17.333 rmmod nvme_tcp 00:41:17.333 rmmod nvme_fabrics 00:41:17.333 rmmod nvme_keyring 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4179127 ']' 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4179127 00:41:17.333 13:46:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 4179127 ']' 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 4179127 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:17.333 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4179127 00:41:17.592 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:41:17.592 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:41:17.592 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4179127' 00:41:17.592 killing process with pid 4179127 00:41:17.592 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 4179127 00:41:17.592 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 4179127 00:41:18.161 [2024-11-07 13:46:25.944058] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:41:18.161 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:18.161 13:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:18.161 13:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:18.161 13:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:41:18.161 13:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:41:18.161 13:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:18.161 13:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:41:18.161 13:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:18.161 13:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:18.161 13:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:18.161 13:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:18.161 13:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:41:20.703 00:41:20.703 real 0m16.652s 00:41:20.703 user 
0m23.961s 00:41:20.703 sys 0m8.591s 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:20.703 ************************************ 00:41:20.703 END TEST nvmf_host_management 00:41:20.703 ************************************ 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:20.703 ************************************ 00:41:20.703 START TEST nvmf_lvol 00:41:20.703 ************************************ 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:41:20.703 * Looking for test storage... 00:41:20.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
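
The cmp_versions xtrace that begins above and resumes just below is scripts/common.sh deciding whether the installed lcov (1.15) sorts before version 2: each version string is split on dots into an array and the fields are compared numerically, left to right, so 1 < 2 settles it on the first field and the pre-2.0 lcov option set is selected. A simplified sketch of that comparison; this is an illustration that treats missing fields as 0, not SPDK's exact cmp_versions:

    cmp_lt() {   # returns 0 (true) if $1 sorts before $2, comparing dot-separated fields numerically
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < max; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    cmp_lt 1.15 2 && echo "lcov 1.15 < 2: use the pre-2.0 lcov option set"
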
00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:20.703 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:20.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.703 --rc genhtml_branch_coverage=1 00:41:20.704 --rc genhtml_function_coverage=1 00:41:20.704 --rc genhtml_legend=1 00:41:20.704 --rc geninfo_all_blocks=1 00:41:20.704 --rc geninfo_unexecuted_blocks=1 00:41:20.704 00:41:20.704 ' 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:20.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.704 --rc genhtml_branch_coverage=1 00:41:20.704 --rc genhtml_function_coverage=1 00:41:20.704 --rc genhtml_legend=1 00:41:20.704 --rc geninfo_all_blocks=1 00:41:20.704 --rc geninfo_unexecuted_blocks=1 00:41:20.704 00:41:20.704 ' 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:20.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.704 --rc genhtml_branch_coverage=1 00:41:20.704 --rc genhtml_function_coverage=1 00:41:20.704 --rc genhtml_legend=1 00:41:20.704 --rc geninfo_all_blocks=1 00:41:20.704 --rc geninfo_unexecuted_blocks=1 00:41:20.704 00:41:20.704 ' 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:20.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.704 --rc genhtml_branch_coverage=1 00:41:20.704 --rc genhtml_function_coverage=1 
00:41:20.704 --rc genhtml_legend=1 00:41:20.704 --rc geninfo_all_blocks=1 00:41:20.704 --rc geninfo_unexecuted_blocks=1 00:41:20.704 00:41:20.704 ' 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same three toolchain dirs repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...same repeated toolchain prefix...]:/var/lib/snapd/snap/bin 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...same repeated toolchain prefix...]:/var/lib/snapd/snap/bin 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...same repeated toolchain prefix...]:/var/lib/snapd/snap/bin 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:20.704 13:46:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:41:20.704 13:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:28.844 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:28.844 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:41:28.844 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:28.844 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:28.845 13:46:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:28.845 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:28.845 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:28.845 Found net devices under 0000:31:00.0: cvl_0_0 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:28.845 Found net devices under 0000:31:00.1: cvl_0_1 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:28.845 
13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:28.845 13:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:28.845 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:28.845 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:28.845 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:28.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:28.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:41:28.845 00:41:28.845 --- 10.0.0.2 ping statistics --- 00:41:28.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:28.845 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:41:28.845 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:28.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:28.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:41:28.845 00:41:28.845 --- 10.0.0.1 ping statistics --- 00:41:28.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:28.845 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4184788 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4184788 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 4184788 ']' 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:28.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:28.846 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:28.846 [2024-11-07 13:46:36.174184] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
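In outline, the nvmf/common.sh trace above wires the two E810 ports into the split-namespace topology these tests rely on: the target-side netdev (cvl_0_0) is moved into a private network namespace while the initiator-side netdev (cvl_0_1) stays in the root namespace, so traffic between 10.0.0.1 and 10.0.0.2 crosses the physical link rather than loopback. A minimal shell sketch of the same sequence, with interface names, addresses, and port taken from this run:

    # target endpoint in its own netns; initiator stays in the root namespace
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface, then verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... --interrupt-mode -m 0x7, as traced above), which is why the DPDK and reactor notices that follow come from the namespaced process.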
00:41:28.846 [2024-11-07 13:46:36.176545] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:41:28.846 [2024-11-07 13:46:36.176626] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:28.846 [2024-11-07 13:46:36.340122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:28.846 [2024-11-07 13:46:36.439476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:28.846 [2024-11-07 13:46:36.439520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:28.846 [2024-11-07 13:46:36.439534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:28.846 [2024-11-07 13:46:36.439545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:28.846 [2024-11-07 13:46:36.439555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:28.846 [2024-11-07 13:46:36.441625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:28.846 [2024-11-07 13:46:36.441704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:28.846 [2024-11-07 13:46:36.441707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:28.846 [2024-11-07 13:46:36.680142] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:28.846 [2024-11-07 13:46:36.680246] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:28.846 [2024-11-07 13:46:36.680700] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:28.846 [2024-11-07 13:46:36.680990] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
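Once the reactors are up and waitforlisten returns (traced next), the nvmf_lvol test provisions its whole stack over rpc.py. The calls traced below boil down to roughly this sequence (rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and <lvs-uuid>/<lvol-uuid> for the UUIDs the create calls print):

    rpc.py nvmf_create_transport -t tcp -o -u 8192                    # transport opts as common.sh assembled them
    rpc.py bdev_malloc_create 64 512                                  # Malloc0: 64 MiB, 512 B blocks
    rpc.py bdev_malloc_create 64 512                                  # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs
    rpc.py bdev_lvol_create_lvstore raid0 lvs                         # prints <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                     # 20 MiB volume, prints <lvol-uuid>
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

While spdk_nvme_perf then drives randwrite I/O at the exported namespace from the initiator side, the test snapshots, resizes, clones, and inflates the live volume (bdev_lvol_snapshot / bdev_lvol_resize / bdev_lvol_clone / bdev_lvol_inflate), all visible in the trace further down.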
00:41:29.107 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:29.107 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:41:29.107 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:29.107 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:29.107 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:29.107 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:29.107 13:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:29.368 [2024-11-07 13:46:37.138509] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:29.368 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:29.628 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:41:29.628 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:29.889 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:41:29.889 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:41:29.889 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:41:30.149 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1df839e9-74cd-4eaa-9e75-2a683e3f72e8 00:41:30.149 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1df839e9-74cd-4eaa-9e75-2a683e3f72e8 lvol 20 00:41:30.409 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9fae19ad-5bf6-48a1-8cb8-bec588039b76 00:41:30.409 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:30.409 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9fae19ad-5bf6-48a1-8cb8-bec588039b76 00:41:30.749 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:30.749 [2024-11-07 13:46:38.666705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:41:30.749 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:31.017 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4185472 00:41:31.017 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:41:31.017 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:41:31.995 13:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9fae19ad-5bf6-48a1-8cb8-bec588039b76 MY_SNAPSHOT 00:41:32.256 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=67ce0aca-40da-4e17-9112-59d44aa7aaee 00:41:32.256 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9fae19ad-5bf6-48a1-8cb8-bec588039b76 30 00:41:32.516 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 67ce0aca-40da-4e17-9112-59d44aa7aaee MY_CLONE 00:41:32.776 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=76811943-b2aa-4eb4-824c-a074e8baceec 00:41:32.776 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 76811943-b2aa-4eb4-824c-a074e8baceec 00:41:33.347 13:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4185472 00:41:41.481 Initializing NVMe Controllers 00:41:41.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:41:41.481 Controller IO queue size 128, less than required. 00:41:41.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:41.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:41:41.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:41:41.482 Initialization complete. Launching workers. 
00:41:41.482 ========================================================
00:41:41.482 Latency(us)
00:41:41.482 Device Information : IOPS MiB/s Average min max
00:41:41.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14444.60 56.42 8863.67 290.02 152214.20
00:41:41.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11326.50 44.24 11304.98 2510.59 168990.65
00:41:41.482 ========================================================
00:41:41.482 Total : 25771.10 100.67 9936.63 290.02 168990.65
00:41:41.482
00:41:41.482 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:41:41.743 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9fae19ad-5bf6-48a1-8cb8-bec588039b76
00:41:41.743 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1df839e9-74cd-4eaa-9e75-2a683e3f72e8
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:41:42.003 rmmod nvme_tcp
00:41:42.003 rmmod nvme_fabrics
00:41:42.003 rmmod nvme_keyring
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4184788 ']'
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4184788
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 4184788 ']'
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 4184788
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4184788 00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:42.003 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:42.003 13:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4184788' 00:41:42.003 killing process with pid 4184788 00:41:42.003 13:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 4184788 00:41:42.003 13:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 4184788 00:41:43.409 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:43.409 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:43.409 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:43.409 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:41:43.409 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:41:43.409 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:43.409 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:41:43.409 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:43.409 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:43.409 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:43.409 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:43.409 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:45.323 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:45.323 00:41:45.323 real 0m24.988s 00:41:45.323 user 0m57.136s 00:41:45.323 sys 0m10.837s 00:41:45.323 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:45.323 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:45.323 ************************************ 00:41:45.323 END TEST nvmf_lvol 00:41:45.323 ************************************ 00:41:45.323 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:41:45.323 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:41:45.323 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:45.323 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:45.323 ************************************ 00:41:45.323 START TEST nvmf_lvs_grow 00:41:45.323 
************************************ 00:41:45.323 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:41:45.323 * Looking for test storage... 00:41:45.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:45.323 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:45.323 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:45.323 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:45.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.585 --rc genhtml_branch_coverage=1 00:41:45.585 --rc genhtml_function_coverage=1 00:41:45.585 --rc genhtml_legend=1 00:41:45.585 --rc geninfo_all_blocks=1 00:41:45.585 --rc geninfo_unexecuted_blocks=1 00:41:45.585 00:41:45.585 ' 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:45.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.585 --rc genhtml_branch_coverage=1 00:41:45.585 --rc genhtml_function_coverage=1 00:41:45.585 --rc genhtml_legend=1 00:41:45.585 --rc geninfo_all_blocks=1 00:41:45.585 --rc geninfo_unexecuted_blocks=1 00:41:45.585 00:41:45.585 ' 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:45.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.585 --rc genhtml_branch_coverage=1 00:41:45.585 --rc genhtml_function_coverage=1 00:41:45.585 --rc genhtml_legend=1 00:41:45.585 --rc geninfo_all_blocks=1 00:41:45.585 --rc geninfo_unexecuted_blocks=1 00:41:45.585 00:41:45.585 ' 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:45.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.585 --rc genhtml_branch_coverage=1 00:41:45.585 --rc genhtml_function_coverage=1 00:41:45.585 --rc genhtml_legend=1 00:41:45.585 --rc geninfo_all_blocks=1 00:41:45.585 --rc geninfo_unexecuted_blocks=1 00:41:45.585 00:41:45.585 ' 00:41:45.585 13:46:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.585 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
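The gather_supported_nvmf_pci_devs walk traced below matches PCI functions against the known Intel/Mellanox fabric-capable NIC IDs (the E810 ports here report 0x8086:0x159b) and resolves each hit to its kernel netdev through sysfs. The core lookup, sketched under the assumption that only this run's two E810 ports are present:

    # resolve each candidate PCI function to its net interface via sysfs
    for pci in 0000:31:00.0 0000:31:00.1; do          # E810 ports (0x8086:0x159b) found in this run
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")       # strip the sysfs path, keep the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

This is where the "Found net devices under 0000:31:00.x: cvl_0_x" lines below come from.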
00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:41:45.586 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:53.729 13:47:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:53.729 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:53.729 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:53.729 Found net devices under 0000:31:00.0: cvl_0_0 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:53.729 Found net devices under 0000:31:00.1: cvl_0_1 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:53.729 13:47:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:53.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:53.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:41:53.729 00:41:53.729 --- 10.0.0.2 ping statistics --- 00:41:53.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:53.729 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:53.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:53.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:41:53.729 00:41:53.729 --- 10.0.0.1 ping statistics --- 00:41:53.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:53.729 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:41:53.729 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4192134 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4192134 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 4192134 ']' 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:53.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:53.730 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:53.730 [2024-11-07 13:47:01.505244] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
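The lvs_grow_clean test that starts below exercises growing an lvstore's backing device underneath it: a 200 MiB file-backed AIO bdev carries the lvstore, the file is then enlarged to 400 MiB, and the AIO bdev is rescanned. Compressed into a sketch (the long /var/jenkins/... prefix abbreviated to "...", names as in the test, <lvs-uuid> standing for the UUID the create call prints):

    truncate -s 200M .../test/nvmf/target/aio_bdev                   # backing file
    rpc.py bdev_aio_create .../test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49 of 50 4 MiB clusters; the rest holds metadata
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150                   # 150 MiB volume
    truncate -s 400M .../test/nvmf/target/aio_bdev                   # grow the file under the bdev
    rpc.py bdev_aio_rescan aio_bdev                                  # AIO bdev resizes (51200 -> 102400 blocks)

Note that total_data_clusters still reads 49 right after the rescan: at this point the resize only surfaces as a bdev event (the "Unsupported bdev event: type 1" notice below), without the lvstore itself growing yet.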
00:41:53.730 [2024-11-07 13:47:01.507530] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:41:53.730 [2024-11-07 13:47:01.507616] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:53.730 [2024-11-07 13:47:01.651491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:53.991 [2024-11-07 13:47:01.750147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:53.991 [2024-11-07 13:47:01.750188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:53.991 [2024-11-07 13:47:01.750202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:53.991 [2024-11-07 13:47:01.750214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:53.991 [2024-11-07 13:47:01.750226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:53.991 [2024-11-07 13:47:01.751330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:53.991 [2024-11-07 13:47:01.988507] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:53.991 [2024-11-07 13:47:01.988815] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:54.563 [2024-11-07 13:47:02.444429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:54.563 ************************************ 00:41:54.563 START TEST lvs_grow_clean 00:41:54.563 ************************************ 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:54.563 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:54.825 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:54.825 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:55.086 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b2909fed-7629-4cb7-aa8b-673d6d7dc9d8 00:41:55.086 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2909fed-7629-4cb7-aa8b-673d6d7dc9d8 00:41:55.086 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:55.086 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:55.086 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:55.086 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b2909fed-7629-4cb7-aa8b-673d6d7dc9d8 lvol 150 00:41:55.346 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d6bb5970-079c-4696-a1d5-bc7c771ab27e 00:41:55.347 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:55.347 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:55.608 [2024-11-07 13:47:03.356023] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:55.608 [2024-11-07 13:47:03.356145] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:55.608 true 00:41:55.608 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2909fed-7629-4cb7-aa8b-673d6d7dc9d8 00:41:55.608 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:55.608 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:55.608 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:55.868 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d6bb5970-079c-4696-a1d5-bc7c771ab27e 00:41:56.128 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:56.128 [2024-11-07 13:47:04.060303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:56.128 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:56.388 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4192815 00:41:56.388 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:56.388 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:56.388 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4192815 /var/tmp/bdevperf.sock 00:41:56.388 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 4192815 ']' 00:41:56.388 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock
00:41:56.388 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100
00:41:56.388 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:41:56.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:41:56.388 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable
00:41:56.388 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:41:56.388 [2024-11-07 13:47:04.320952] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:41:56.388 [2024-11-07 13:47:04.321064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4192815 ]
00:41:56.648 [2024-11-07 13:47:04.471762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:56.648 [2024-11-07 13:47:04.569428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:41:57.218 13:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:41:57.218 13:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0
00:41:57.218 13:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:41:57.478 Nvme0n1
00:41:57.478 13:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:41:57.478 [
00:41:57.478 {
00:41:57.478 "name": "Nvme0n1",
00:41:57.478 "aliases": [
00:41:57.478 "d6bb5970-079c-4696-a1d5-bc7c771ab27e"
00:41:57.478 ],
00:41:57.478 "product_name": "NVMe disk",
00:41:57.478 "block_size": 4096,
00:41:57.478 "num_blocks": 38912,
00:41:57.478 "uuid": "d6bb5970-079c-4696-a1d5-bc7c771ab27e",
00:41:57.478 "numa_id": 0,
00:41:57.478 "assigned_rate_limits": {
00:41:57.478 "rw_ios_per_sec": 0,
00:41:57.478 "rw_mbytes_per_sec": 0,
00:41:57.478 "r_mbytes_per_sec": 0,
00:41:57.478 "w_mbytes_per_sec": 0
00:41:57.478 },
00:41:57.478 "claimed": false,
00:41:57.478 "zoned": false,
00:41:57.478 "supported_io_types": {
00:41:57.478 "read": true,
00:41:57.478 "write": true,
00:41:57.478 "unmap": true,
00:41:57.478 "flush": true,
00:41:57.478 "reset": true,
00:41:57.478 "nvme_admin": true,
00:41:57.478 "nvme_io": true,
00:41:57.478 "nvme_io_md": false,
00:41:57.478 "write_zeroes": true,
00:41:57.478 "zcopy": false,
00:41:57.478 "get_zone_info": false,
00:41:57.478 "zone_management": false,
00:41:57.478 "zone_append": false,
00:41:57.478 "compare": true,
00:41:57.478 "compare_and_write": true,
00:41:57.478 "abort": true,
00:41:57.479 "seek_hole": false,
00:41:57.479 "seek_data": false,
00:41:57.479 "copy": true,
00:41:57.479 "nvme_iov_md": false
00:41:57.479 },
00:41:57.479 "memory_domains": [
00:41:57.479 {
00:41:57.479 "dma_device_id": "system",
00:41:57.479 "dma_device_type": 1
00:41:57.479 }
00:41:57.479 ],
00:41:57.479 "driver_specific": {
00:41:57.479 "nvme": [
00:41:57.479 {
00:41:57.479 "trid": {
00:41:57.479 "trtype": "TCP",
00:41:57.479 "adrfam": "IPv4",
00:41:57.479 "traddr": "10.0.0.2",
00:41:57.479 "trsvcid": "4420",
00:41:57.479 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:41:57.479 },
00:41:57.479 "ctrlr_data": {
00:41:57.479 "cntlid": 1,
00:41:57.479 "vendor_id": "0x8086",
00:41:57.479 "model_number": "SPDK bdev Controller",
00:41:57.479 "serial_number": "SPDK0",
00:41:57.479 "firmware_revision": "25.01",
00:41:57.479 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:41:57.479 "oacs": {
00:41:57.479 "security": 0,
00:41:57.479 "format": 0,
00:41:57.479 "firmware": 0,
00:41:57.479 "ns_manage": 0
00:41:57.479 },
00:41:57.479 "multi_ctrlr": true,
00:41:57.479 "ana_reporting": false
00:41:57.479 },
00:41:57.479 "vs": {
00:41:57.479 "nvme_version": "1.3"
00:41:57.479 },
00:41:57.479 "ns_data": {
00:41:57.479 "id": 1,
00:41:57.479 "can_share": true
00:41:57.479 }
00:41:57.479 }
00:41:57.479 ],
00:41:57.479 "mp_policy": "active_passive"
00:41:57.479 }
00:41:57.479 }
00:41:57.479 ]
00:41:57.739 13:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4192996
00:41:57.739 13:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:41:57.739 13:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:41:57.739 Running I/O for 10 seconds...
00:41:58.680 Latency(us)
00:41:58.680 [2024-11-07T12:47:06.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:58.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:58.680 Nvme0n1 : 1.00 15885.00 62.05 0.00 0.00 0.00 0.00 0.00
00:41:58.680 [2024-11-07T12:47:06.687Z] ===================================================================================================================
00:41:58.680 [2024-11-07T12:47:06.687Z] Total : 15885.00 62.05 0.00 0.00 0.00 0.00 0.00
00:41:58.680
00:41:59.622 13:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b2909fed-7629-4cb7-aa8b-673d6d7dc9d8
00:41:59.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:59.622 Nvme0n1 : 2.00 16039.00 62.65 0.00 0.00 0.00 0.00 0.00
00:41:59.622 [2024-11-07T12:47:07.629Z] ===================================================================================================================
00:41:59.622 [2024-11-07T12:47:07.629Z] Total : 16039.00 62.65 0.00 0.00 0.00 0.00 0.00
00:41:59.622
00:41:59.881 true 13:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2909fed-7629-4cb7-aa8b-673d6d7dc9d8
00:41:59.881 13:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:41:59.881 13:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:41:59.881 13:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:41:59.881 13:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4192996
00:42:00.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:00.821 Nvme0n1 : 3.00 16111.33 62.93 0.00 0.00 0.00 0.00 0.00
00:42:00.821 [2024-11-07T12:47:08.828Z] ===================================================================================================================
00:42:00.821 [2024-11-07T12:47:08.828Z] Total : 16111.33 62.93 0.00 0.00 0.00 0.00 0.00
00:42:00.821
00:42:01.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:01.760 Nvme0n1 : 4.00 16147.50 63.08 0.00 0.00 0.00 0.00 0.00
00:42:01.760 [2024-11-07T12:47:09.767Z] ===================================================================================================================
00:42:01.760 [2024-11-07T12:47:09.767Z] Total : 16147.50 63.08 0.00 0.00 0.00 0.00 0.00
00:42:01.760
00:42:02.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:02.699 Nvme0n1 : 5.00 16169.20 63.16 0.00 0.00 0.00 0.00 0.00
00:42:02.699 [2024-11-07T12:47:10.706Z] ===================================================================================================================
00:42:02.699 [2024-11-07T12:47:10.706Z] Total : 16169.20 63.16 0.00 0.00 0.00 0.00 0.00
00:42:02.699
00:42:03.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:03.639 Nvme0n1 : 6.00 16183.67 63.22 0.00 0.00 0.00 0.00 0.00
00:42:03.639 [2024-11-07T12:47:11.646Z] ===================================================================================================================
00:42:03.639 [2024-11-07T12:47:11.646Z] Total : 16183.67 63.22 0.00 0.00 0.00 0.00 0.00
00:42:03.639
00:42:05.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:05.020 Nvme0n1 : 7.00 16212.14 63.33 0.00 0.00 0.00 0.00 0.00
00:42:05.020 [2024-11-07T12:47:13.027Z] ===================================================================================================================
00:42:05.020 [2024-11-07T12:47:13.027Z] Total : 16212.14 63.33 0.00 0.00 0.00 0.00 0.00
00:42:05.020
00:42:05.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:05.961 Nvme0n1 : 8.00 16217.62 63.35 0.00 0.00 0.00 0.00 0.00
00:42:05.961 [2024-11-07T12:47:13.968Z] ===================================================================================================================
00:42:05.961 [2024-11-07T12:47:13.968Z] Total : 16217.62 63.35 0.00 0.00 0.00 0.00 0.00
00:42:05.961
00:42:06.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:06.901 Nvme0n1 : 9.00 16236.00 63.42 0.00 0.00 0.00 0.00 0.00
00:42:06.901 [2024-11-07T12:47:14.908Z] ===================================================================================================================
00:42:06.901 [2024-11-07T12:47:14.908Z] Total : 16236.00 63.42 0.00 0.00 0.00 0.00 0.00
00:42:06.901
00:42:07.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:07.842 Nvme0n1 : 10.00 16250.70 63.48 0.00 0.00 0.00 0.00 0.00
00:42:07.842 [2024-11-07T12:47:15.849Z] ===================================================================================================================
00:42:07.842 [2024-11-07T12:47:15.849Z] Total : 16250.70 63.48 0.00 0.00 0.00 0.00 0.00
00:42:07.842
00:42:07.842
00:42:07.842 Latency(us)
00:42:07.842 [2024-11-07T12:47:15.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:07.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:07.842 Nvme0n1 : 10.01 16248.91 63.47 0.00 0.00 7872.78 2894.51 16056.32
00:42:07.842 [2024-11-07T12:47:15.849Z] ===================================================================================================================
00:42:07.842 [2024-11-07T12:47:15.849Z] Total : 16248.91 63.47 0.00 0.00 7872.78 2894.51 16056.32
00:42:07.842 {
00:42:07.842 "results": [
00:42:07.842 {
00:42:07.842 "job": "Nvme0n1",
00:42:07.842 "core_mask": "0x2",
00:42:07.842 "workload": "randwrite",
00:42:07.842 "status": "finished",
00:42:07.842 "queue_depth": 128,
00:42:07.842 "io_size": 4096,
00:42:07.842 "runtime": 10.005101,
00:42:07.842 "iops": 16248.911430279415,
00:42:07.842 "mibps": 63.472310274528965,
00:42:07.842 "io_failed": 0,
00:42:07.842 "io_timeout": 0,
00:42:07.842 "avg_latency_us": 7872.779721969342,
00:42:07.842 "min_latency_us": 2894.5066666666667,
00:42:07.842 "max_latency_us": 16056.32
00:42:07.842 }
00:42:07.842 ],
00:42:07.842 "core_count": 1
00:42:07.842 }
00:42:07.842 13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4192815
00:42:07.842 13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 4192815 ']'
00:42:07.842 13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 4192815
13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:42:07.842 13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:07.842 13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4192815 00:42:07.842 13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:07.842 13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:07.842 13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4192815' 00:42:07.842 killing process with pid 4192815 00:42:07.842 13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 4192815 00:42:07.842 Received shutdown signal, test time was about 10.000000 seconds 00:42:07.842 00:42:07.842 Latency(us) 00:42:07.842 [2024-11-07T12:47:15.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:07.842 [2024-11-07T12:47:15.849Z] =================================================================================================================== 00:42:07.842 [2024-11-07T12:47:15.849Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:07.842 13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 4192815 00:42:08.413 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:08.413 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:08.673 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2909fed-7629-4cb7-aa8b-673d6d7dc9d8 00:42:08.673 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:42:08.673 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:42:08.673 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:42:08.673 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:08.933 [2024-11-07 13:47:16.832286] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:42:08.933 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2909fed-7629-4cb7-aa8b-673d6d7dc9d8 00:42:08.933 
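Stripped of the harness wrappers, the lvs_grow_clean pass traced above reduces to a short RPC sequence: back an AIO bdev with a 200M file, build an lvstore and a 150M lvol on it, grow the file to 400M, rescan the AIO bdev, and grow the lvstore into the new space. A minimal sketch of that flow, assuming a local SPDK target is already listening on the default RPC socket; the file path and the rpc.py location below are illustrative stand-ins, not the harness paths:

rpc=/path/to/spdk/scripts/rpc.py        # stand-in for the rpc.py invoked in the log
aio=/tmp/aio_file                       # stand-in for .../test/nvmf/target/aio_bdev

truncate -s 200M "$aio"
"$rpc" bdev_aio_create "$aio" aio_bdev 4096                    # 51200 4K blocks
lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
"$rpc" bdev_lvol_create -u "$lvs" lvol 150                     # uses 38 of 49 clusters
truncate -s 400M "$aio"                                        # grow the backing file
"$rpc" bdev_aio_rescan aio_bdev                                # bdev now sees 102400 blocks
"$rpc" bdev_lvol_grow_lvstore -u "$lvs"                        # lvstore follows, prints true
"$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99

All of the RPC names and numbers here come straight from the log above; only the paths and variable names are invented for the sketch.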
13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0
00:42:08.933 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2909fed-7629-4cb7-aa8b-673d6d7dc9d8
00:42:08.933 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:42:08.933 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:42:08.933 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:42:08.933 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:42:08.933 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:42:08.933 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:42:08.933 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:42:08.933 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:42:08.933 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2909fed-7629-4cb7-aa8b-673d6d7dc9d8
00:42:09.195 request:
00:42:09.195 {
00:42:09.195 "uuid": "b2909fed-7629-4cb7-aa8b-673d6d7dc9d8",
00:42:09.195 "method": "bdev_lvol_get_lvstores",
00:42:09.195 "req_id": 1
00:42:09.195 }
00:42:09.195 Got JSON-RPC error response
00:42:09.195 response:
00:42:09.195 {
00:42:09.195 "code": -19,
00:42:09.195 "message": "No such device"
00:42:09.195 }
00:42:09.195 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1
00:42:09.195 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:42:09.195 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:42:09.195 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:42:09.195 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:42:09.455 aio_bdev
00:42:09.455 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev
d6bb5970-079c-4696-a1d5-bc7c771ab27e 00:42:09.455 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=d6bb5970-079c-4696-a1d5-bc7c771ab27e 00:42:09.455 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:42:09.455 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:42:09.455 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:42:09.455 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:42:09.455 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:09.455 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d6bb5970-079c-4696-a1d5-bc7c771ab27e -t 2000 00:42:09.716 [ 00:42:09.716 { 00:42:09.716 "name": "d6bb5970-079c-4696-a1d5-bc7c771ab27e", 00:42:09.716 "aliases": [ 00:42:09.716 "lvs/lvol" 00:42:09.716 ], 00:42:09.716 "product_name": "Logical Volume", 00:42:09.716 "block_size": 4096, 00:42:09.716 "num_blocks": 38912, 00:42:09.716 "uuid": "d6bb5970-079c-4696-a1d5-bc7c771ab27e", 00:42:09.716 "assigned_rate_limits": { 00:42:09.716 "rw_ios_per_sec": 0, 00:42:09.716 "rw_mbytes_per_sec": 0, 00:42:09.716 "r_mbytes_per_sec": 0, 00:42:09.716 "w_mbytes_per_sec": 0 00:42:09.716 }, 00:42:09.716 "claimed": false, 00:42:09.716 "zoned": false, 00:42:09.716 "supported_io_types": { 00:42:09.716 "read": true, 00:42:09.716 "write": true, 00:42:09.716 "unmap": true, 00:42:09.716 "flush": false, 00:42:09.716 "reset": true, 00:42:09.716 "nvme_admin": false, 00:42:09.716 "nvme_io": false, 00:42:09.716 "nvme_io_md": false, 00:42:09.716 "write_zeroes": true, 00:42:09.716 "zcopy": false, 00:42:09.716 "get_zone_info": false, 00:42:09.716 "zone_management": false, 00:42:09.716 "zone_append": false, 00:42:09.716 "compare": false, 00:42:09.716 "compare_and_write": false, 00:42:09.716 "abort": false, 00:42:09.716 "seek_hole": true, 00:42:09.716 "seek_data": true, 00:42:09.716 "copy": false, 00:42:09.716 "nvme_iov_md": false 00:42:09.716 }, 00:42:09.716 "driver_specific": { 00:42:09.716 "lvol": { 00:42:09.716 "lvol_store_uuid": "b2909fed-7629-4cb7-aa8b-673d6d7dc9d8", 00:42:09.716 "base_bdev": "aio_bdev", 00:42:09.716 "thin_provision": false, 00:42:09.716 "num_allocated_clusters": 38, 00:42:09.716 "snapshot": false, 00:42:09.716 "clone": false, 00:42:09.716 "esnap_clone": false 00:42:09.716 } 00:42:09.716 } 00:42:09.716 } 00:42:09.716 ] 00:42:09.716 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:42:09.716 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2909fed-7629-4cb7-aa8b-673d6d7dc9d8 00:42:09.716 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:42:09.976 13:47:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:42:09.976 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2909fed-7629-4cb7-aa8b-673d6d7dc9d8 00:42:09.976 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:42:09.976 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:42:09.976 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d6bb5970-079c-4696-a1d5-bc7c771ab27e 00:42:10.237 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2909fed-7629-4cb7-aa8b-673d6d7dc9d8 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:10.497 00:42:10.497 real 0m15.978s 00:42:10.497 user 0m15.620s 00:42:10.497 sys 0m1.405s 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:42:10.497 ************************************ 00:42:10.497 END TEST lvs_grow_clean 00:42:10.497 ************************************ 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:10.497 ************************************ 00:42:10.497 START TEST lvs_grow_dirty 00:42:10.497 ************************************ 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:42:10.497 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:10.757 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:10.757 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:10.757 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:42:10.757 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:42:11.017 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a5ccca79-d821-462d-921b-a401348d483a 00:42:11.017 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5ccca79-d821-462d-921b-a401348d483a 00:42:11.017 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:42:11.278 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:42:11.278 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:42:11.278 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5ccca79-d821-462d-921b-a401348d483a lvol 150 00:42:11.278 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=21456ca5-6114-4b62-97c7-8a2d87acef15 00:42:11.278 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:11.278 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:42:11.538 [2024-11-07 13:47:19.412038] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:42:11.538 [2024-11-07 13:47:19.412156] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:42:11.538 true 00:42:11.538 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:42:11.538 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5ccca79-d821-462d-921b-a401348d483a 00:42:11.805 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:42:11.805 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:42:11.805 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 21456ca5-6114-4b62-97c7-8a2d87acef15 00:42:12.066 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:12.327 [2024-11-07 13:47:20.112320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:12.327 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:12.327 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2413 00:42:12.327 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:12.327 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:42:12.327 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2413 /var/tmp/bdevperf.sock 00:42:12.327 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2413 ']' 00:42:12.327 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:12.327 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:12.327 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:12.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:42:12.327 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable
00:42:12.327 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:42:12.586 [2024-11-07 13:47:20.376134] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
00:42:12.586 [2024-11-07 13:47:20.376245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413 ]
00:42:12.586 [2024-11-07 13:47:20.528280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:12.845 [2024-11-07 13:47:20.629347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:42:13.414 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:42:13.414 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0
00:42:13.414 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:42:13.414 Nvme0n1
00:42:13.674 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:42:13.674 [
00:42:13.674 {
00:42:13.674 "name": "Nvme0n1",
00:42:13.674 "aliases": [
00:42:13.674 "21456ca5-6114-4b62-97c7-8a2d87acef15"
00:42:13.674 ],
00:42:13.674 "product_name": "NVMe disk",
00:42:13.674 "block_size": 4096,
00:42:13.674 "num_blocks": 38912,
00:42:13.674 "uuid": "21456ca5-6114-4b62-97c7-8a2d87acef15",
00:42:13.674 "numa_id": 0,
00:42:13.674 "assigned_rate_limits": {
00:42:13.674 "rw_ios_per_sec": 0,
00:42:13.674 "rw_mbytes_per_sec": 0,
00:42:13.674 "r_mbytes_per_sec": 0,
00:42:13.674 "w_mbytes_per_sec": 0
00:42:13.674 },
00:42:13.674 "claimed": false,
00:42:13.674 "zoned": false,
00:42:13.674 "supported_io_types": {
00:42:13.674 "read": true,
00:42:13.674 "write": true,
00:42:13.674 "unmap": true,
00:42:13.674 "flush": true,
00:42:13.674 "reset": true,
00:42:13.674 "nvme_admin": true,
00:42:13.674 "nvme_io": true,
00:42:13.674 "nvme_io_md": false,
00:42:13.674 "write_zeroes": true,
00:42:13.674 "zcopy": false,
00:42:13.674 "get_zone_info": false,
00:42:13.674 "zone_management": false,
00:42:13.674 "zone_append": false,
00:42:13.674 "compare": true,
00:42:13.674 "compare_and_write": true,
00:42:13.674 "abort": true,
00:42:13.674 "seek_hole": false,
00:42:13.674 "seek_data": false,
00:42:13.674 "copy": true,
00:42:13.674 "nvme_iov_md": false
00:42:13.674 },
00:42:13.674 "memory_domains": [
00:42:13.674 {
00:42:13.674 "dma_device_id": "system",
00:42:13.674 "dma_device_type": 1
00:42:13.674 }
00:42:13.674 ],
00:42:13.674 "driver_specific": {
00:42:13.674 "nvme": [
00:42:13.674 {
00:42:13.674 "trid": {
00:42:13.674 "trtype": "TCP",
00:42:13.674 "adrfam": "IPv4",
00:42:13.674 "traddr": "10.0.0.2",
00:42:13.674 "trsvcid": "4420",
00:42:13.674 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:42:13.674 },
00:42:13.674 "ctrlr_data": {
00:42:13.674 "cntlid": 1,
00:42:13.674 "vendor_id": "0x8086",
00:42:13.674 "model_number": "SPDK bdev Controller",
00:42:13.674 "serial_number": "SPDK0",
00:42:13.674 "firmware_revision": "25.01",
00:42:13.674 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:42:13.674 "oacs": {
00:42:13.674 "security": 0,
00:42:13.674 "format": 0,
00:42:13.674 "firmware": 0,
00:42:13.674 "ns_manage": 0
00:42:13.674 },
00:42:13.674 "multi_ctrlr": true,
00:42:13.674 "ana_reporting": false
00:42:13.674 },
00:42:13.674 "vs": {
00:42:13.674 "nvme_version": "1.3"
00:42:13.674 },
00:42:13.674 "ns_data": {
00:42:13.674 "id": 1,
00:42:13.674 "can_share": true
00:42:13.674 }
00:42:13.674 }
00:42:13.674 ],
00:42:13.674 "mp_policy": "active_passive"
00:42:13.674 }
00:42:13.674 }
00:42:13.674 ]
00:42:13.674 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2529
00:42:13.674 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:42:13.674 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:42:13.933 Running I/O for 10 seconds...
00:42:14.872 Latency(us)
00:42:14.872 [2024-11-07T12:47:22.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:14.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:14.872 Nvme0n1 : 1.00 15949.00 62.30 0.00 0.00 0.00 0.00 0.00
00:42:14.872 [2024-11-07T12:47:22.879Z] ===================================================================================================================
00:42:14.872 [2024-11-07T12:47:22.879Z] Total : 15949.00 62.30 0.00 0.00 0.00 0.00 0.00
00:42:14.872
00:42:15.814 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a5ccca79-d821-462d-921b-a401348d483a
00:42:15.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:15.814 Nvme0n1 : 2.00 16087.50 62.84 0.00 0.00 0.00 0.00 0.00
00:42:15.814 [2024-11-07T12:47:23.821Z] ===================================================================================================================
00:42:15.814 [2024-11-07T12:47:23.821Z] Total : 16087.50 62.84 0.00 0.00 0.00 0.00 0.00
00:42:15.814
00:42:15.814 true 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5ccca79-d821-462d-921b-a401348d483a
00:42:15.814 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:42:16.075 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:42:16.075 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:42:16.075 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2529
00:42:17.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:17.017 Nvme0n1 : 3.00 16143.67 63.06 0.00 0.00 0.00 0.00 0.00
00:42:17.017 [2024-11-07T12:47:25.024Z] ===================================================================================================================
00:42:17.017 [2024-11-07T12:47:25.024Z] Total : 16143.67 63.06 0.00 0.00 0.00 0.00 0.00
00:42:17.017
00:42:17.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:17.958 Nvme0n1 : 4.00 16171.75 63.17 0.00 0.00 0.00 0.00 0.00
00:42:17.958 [2024-11-07T12:47:25.965Z] ===================================================================================================================
00:42:17.958 [2024-11-07T12:47:25.965Z] Total : 16171.75 63.17 0.00 0.00 0.00 0.00 0.00
00:42:17.958
00:42:18.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:18.900 Nvme0n1 : 5.00 16192.00 63.25 0.00 0.00 0.00 0.00 0.00
00:42:18.900 [2024-11-07T12:47:26.907Z] ===================================================================================================================
00:42:18.900 [2024-11-07T12:47:26.907Z] Total : 16192.00 63.25 0.00 0.00 0.00 0.00 0.00
00:42:18.900
00:42:19.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:19.843 Nvme0n1 : 6.00 16223.83 63.37 0.00 0.00 0.00 0.00 0.00
00:42:19.843 [2024-11-07T12:47:27.850Z] ===================================================================================================================
00:42:19.843 [2024-11-07T12:47:27.850Z] Total : 16223.83 63.37 0.00 0.00 0.00 0.00 0.00
00:42:19.843
00:42:20.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:20.786 Nvme0n1 : 7.00 16228.43 63.39 0.00 0.00 0.00 0.00 0.00
00:42:20.786 [2024-11-07T12:47:28.793Z] ===================================================================================================================
00:42:20.786 [2024-11-07T12:47:28.793Z] Total : 16228.43 63.39 0.00 0.00 0.00 0.00 0.00
00:42:20.786
00:42:21.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:21.730 Nvme0n1 : 8.00 16247.75 63.47 0.00 0.00 0.00 0.00 0.00
00:42:21.730 [2024-11-07T12:47:29.737Z] ===================================================================================================================
00:42:21.730 [2024-11-07T12:47:29.737Z] Total : 16247.75 63.47 0.00 0.00 0.00 0.00 0.00
00:42:21.730
00:42:23.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:23.114 Nvme0n1 : 9.00 16248.67 63.47 0.00 0.00 0.00 0.00 0.00
00:42:23.114 [2024-11-07T12:47:31.121Z] ===================================================================================================================
00:42:23.114 [2024-11-07T12:47:31.121Z] Total : 16248.67 63.47 0.00 0.00 0.00 0.00 0.00
00:42:23.114
00:42:23.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:23.781 Nvme0n1 : 10.00 16262.10 63.52 0.00 0.00 0.00 0.00 0.00
00:42:23.781 [2024-11-07T12:47:31.788Z] ===================================================================================================================
00:42:23.781 [2024-11-07T12:47:31.788Z] Total : 16262.10 63.52 0.00 0.00 0.00 0.00 0.00
00:42:23.781
00:42:23.781
00:42:23.781 Latency(us)
00:42:23.781 [2024-11-07T12:47:31.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:23.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:42:23.781 Nvme0n1 : 10.01 16264.56 63.53 0.00 0.00 7866.53 3072.00 15510.19
00:42:23.781 [2024-11-07T12:47:31.788Z] ===================================================================================================================
00:42:23.781 [2024-11-07T12:47:31.788Z] Total : 16264.56 63.53 0.00 0.00 7866.53 3072.00 15510.19
00:42:23.781 {
00:42:23.781 "results": [
00:42:23.781 {
00:42:23.781 "job": "Nvme0n1",
00:42:23.781 "core_mask": "0x2",
00:42:23.781 "workload": "randwrite",
00:42:23.781 "status": "finished",
00:42:23.781 "queue_depth": 128,
00:42:23.781 "io_size": 4096,
00:42:23.781 "runtime": 10.006359,
00:42:23.781 "iops": 16264.557367969708,
00:42:23.781 "mibps": 63.53342721863167,
00:42:23.781 "io_failed": 0,
00:42:23.781 "io_timeout": 0,
00:42:23.781 "avg_latency_us": 7866.528875425757,
00:42:23.781 "min_latency_us": 3072.0,
00:42:23.781 "max_latency_us": 15510.186666666666
00:42:23.781 }
00:42:23.781 ],
00:42:23.781 "core_count": 1
00:42:23.781 }
00:42:23.781 13:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2413
00:42:23.781 13:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2413 ']'
00:42:23.781 13:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2413
00:42:23.781 13:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname
00:42:23.781 13:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:42:23.781 13:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2413
00:42:24.104 13:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:42:24.104 13:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:42:24.104 13:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2413'
00:42:24.104 killing process with pid 2413
00:42:24.104 13:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2413
00:42:24.104 Received shutdown signal, test time was about 10.000000 seconds
00:42:24.104
00:42:24.104 Latency(us)
00:42:24.104 [2024-11-07T12:47:32.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:24.104 [2024-11-07T12:47:32.111Z] ===================================================================================================================
00:42:24.104 [2024-11-07T12:47:32.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:42:24.104 13:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2413
00:42:24.366 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:42:24.627 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:42:24.627 13:47:32
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5ccca79-d821-462d-921b-a401348d483a 00:42:24.627 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4192134 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4192134 00:42:24.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4192134 Killed "${NVMF_APP[@]}" "$@" 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4799 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4799 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 4799 ']' 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:24.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:24.887 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:25.148 [2024-11-07 13:47:32.943572] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
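The dirty variant differs from the clean pass only in how it lets go of the lvstore: instead of deleting the lvol and lvstore, it kill -9s the nvmf target while the lvstore is still open, restarts it, and re-creates the AIO bdev, which forces the blobstore recovery path seen in the notices that follow. A rough sketch of that shape, under the same illustrative naming as the earlier sketch (tgt and nvmfpid are hypothetical stand-ins; the nvmf_tgt flags are copied from the log, the netns wrapper omitted):

kill -9 "$nvmfpid"                                # drop the target; lvstore left dirty
"$tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &   # restart nvmf_tgt, as above
"$rpc" bdev_aio_create "$aio" aio_bdev 4096       # reload triggers blobstore recovery
"$rpc" bdev_get_bdevs -b "$lvol" -t 2000          # wait for the lvol to reappear intact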
00:42:25.148 [2024-11-07 13:47:32.945932] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:42:25.148 [2024-11-07 13:47:32.946016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:25.148 [2024-11-07 13:47:33.097587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:25.408 [2024-11-07 13:47:33.192329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:25.408 [2024-11-07 13:47:33.192373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:25.408 [2024-11-07 13:47:33.192387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:25.408 [2024-11-07 13:47:33.192400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:25.408 [2024-11-07 13:47:33.192412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:25.408 [2024-11-07 13:47:33.193632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:25.668 [2024-11-07 13:47:33.430356] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:25.668 [2024-11-07 13:47:33.430671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:25.928 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:25.928 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:42:25.928 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:25.928 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:25.928 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:25.928 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:25.928 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:25.928 [2024-11-07 13:47:33.908939] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:42:25.928 [2024-11-07 13:47:33.909084] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:42:25.928 [2024-11-07 13:47:33.909128] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:42:26.190 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:42:26.190 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 21456ca5-6114-4b62-97c7-8a2d87acef15 00:42:26.190 13:47:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=21456ca5-6114-4b62-97c7-8a2d87acef15 00:42:26.190 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:42:26.190 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:42:26.190 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:42:26.190 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:42:26.190 13:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:26.190 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 21456ca5-6114-4b62-97c7-8a2d87acef15 -t 2000 00:42:26.452 [ 00:42:26.452 { 00:42:26.452 "name": "21456ca5-6114-4b62-97c7-8a2d87acef15", 00:42:26.452 "aliases": [ 00:42:26.452 "lvs/lvol" 00:42:26.452 ], 00:42:26.452 "product_name": "Logical Volume", 00:42:26.452 "block_size": 4096, 00:42:26.452 "num_blocks": 38912, 00:42:26.452 "uuid": "21456ca5-6114-4b62-97c7-8a2d87acef15", 00:42:26.452 "assigned_rate_limits": { 00:42:26.452 "rw_ios_per_sec": 0, 00:42:26.452 "rw_mbytes_per_sec": 0, 00:42:26.452 "r_mbytes_per_sec": 0, 00:42:26.452 "w_mbytes_per_sec": 0 00:42:26.452 }, 00:42:26.452 "claimed": false, 00:42:26.452 "zoned": false, 00:42:26.452 "supported_io_types": { 00:42:26.452 "read": true, 00:42:26.452 "write": true, 00:42:26.452 "unmap": true, 00:42:26.452 "flush": false, 00:42:26.452 "reset": true, 00:42:26.452 "nvme_admin": false, 00:42:26.452 "nvme_io": false, 00:42:26.452 "nvme_io_md": false, 00:42:26.452 "write_zeroes": true, 00:42:26.452 "zcopy": false, 00:42:26.452 "get_zone_info": false, 00:42:26.452 "zone_management": false, 00:42:26.452 "zone_append": false, 00:42:26.452 "compare": false, 00:42:26.452 "compare_and_write": false, 00:42:26.452 "abort": false, 00:42:26.452 "seek_hole": true, 00:42:26.452 "seek_data": true, 00:42:26.452 "copy": false, 00:42:26.452 "nvme_iov_md": false 00:42:26.452 }, 00:42:26.452 "driver_specific": { 00:42:26.452 "lvol": { 00:42:26.452 "lvol_store_uuid": "a5ccca79-d821-462d-921b-a401348d483a", 00:42:26.452 "base_bdev": "aio_bdev", 00:42:26.452 "thin_provision": false, 00:42:26.452 "num_allocated_clusters": 38, 00:42:26.452 "snapshot": false, 00:42:26.452 "clone": false, 00:42:26.452 "esnap_clone": false 00:42:26.452 } 00:42:26.452 } 00:42:26.452 } 00:42:26.452 ] 00:42:26.452 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:42:26.452 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5ccca79-d821-462d-921b-a401348d483a 00:42:26.452 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:42:26.452 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:42:26.452 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5ccca79-d821-462d-921b-a401348d483a 00:42:26.452 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:42:26.713 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:42:26.713 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:26.974 [2024-11-07 13:47:34.782440] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:42:26.974 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5ccca79-d821-462d-921b-a401348d483a 00:42:26.974 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:42:26.974 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5ccca79-d821-462d-921b-a401348d483a 00:42:26.974 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:26.974 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:26.974 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:26.974 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:26.974 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:26.974 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:26.974 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:26.974 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:42:26.974 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5ccca79-d821-462d-921b-a401348d483a 00:42:27.235 request: 00:42:27.235 { 00:42:27.235 "uuid": "a5ccca79-d821-462d-921b-a401348d483a", 00:42:27.235 "method": "bdev_lvol_get_lvstores", 
00:42:27.235 "req_id": 1 00:42:27.235 } 00:42:27.235 Got JSON-RPC error response 00:42:27.235 response: 00:42:27.235 { 00:42:27.235 "code": -19, 00:42:27.235 "message": "No such device" 00:42:27.235 } 00:42:27.235 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:42:27.235 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:27.235 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:27.235 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:27.235 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:27.235 aio_bdev 00:42:27.235 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 21456ca5-6114-4b62-97c7-8a2d87acef15 00:42:27.235 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=21456ca5-6114-4b62-97c7-8a2d87acef15 00:42:27.235 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:42:27.235 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:42:27.235 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:42:27.235 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:42:27.235 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:27.495 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 21456ca5-6114-4b62-97c7-8a2d87acef15 -t 2000 00:42:27.756 [ 00:42:27.756 { 00:42:27.756 "name": "21456ca5-6114-4b62-97c7-8a2d87acef15", 00:42:27.756 "aliases": [ 00:42:27.756 "lvs/lvol" 00:42:27.756 ], 00:42:27.756 "product_name": "Logical Volume", 00:42:27.756 "block_size": 4096, 00:42:27.756 "num_blocks": 38912, 00:42:27.756 "uuid": "21456ca5-6114-4b62-97c7-8a2d87acef15", 00:42:27.756 "assigned_rate_limits": { 00:42:27.756 "rw_ios_per_sec": 0, 00:42:27.756 "rw_mbytes_per_sec": 0, 00:42:27.756 "r_mbytes_per_sec": 0, 00:42:27.756 "w_mbytes_per_sec": 0 00:42:27.756 }, 00:42:27.756 "claimed": false, 00:42:27.756 "zoned": false, 00:42:27.756 "supported_io_types": { 00:42:27.756 "read": true, 00:42:27.756 "write": true, 00:42:27.756 "unmap": true, 00:42:27.756 "flush": false, 00:42:27.756 "reset": true, 00:42:27.756 "nvme_admin": false, 00:42:27.756 "nvme_io": false, 00:42:27.756 "nvme_io_md": false, 00:42:27.756 "write_zeroes": true, 00:42:27.756 "zcopy": false, 00:42:27.756 "get_zone_info": false, 00:42:27.756 "zone_management": false, 00:42:27.756 "zone_append": false, 00:42:27.756 
"compare": false, 00:42:27.756 "compare_and_write": false, 00:42:27.756 "abort": false, 00:42:27.756 "seek_hole": true, 00:42:27.756 "seek_data": true, 00:42:27.756 "copy": false, 00:42:27.756 "nvme_iov_md": false 00:42:27.756 }, 00:42:27.756 "driver_specific": { 00:42:27.756 "lvol": { 00:42:27.756 "lvol_store_uuid": "a5ccca79-d821-462d-921b-a401348d483a", 00:42:27.756 "base_bdev": "aio_bdev", 00:42:27.756 "thin_provision": false, 00:42:27.756 "num_allocated_clusters": 38, 00:42:27.756 "snapshot": false, 00:42:27.756 "clone": false, 00:42:27.756 "esnap_clone": false 00:42:27.756 } 00:42:27.756 } 00:42:27.756 } 00:42:27.756 ] 00:42:27.756 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:42:27.756 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5ccca79-d821-462d-921b-a401348d483a 00:42:27.756 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:42:27.756 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:42:27.756 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5ccca79-d821-462d-921b-a401348d483a 00:42:27.756 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:42:28.017 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:42:28.017 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 21456ca5-6114-4b62-97c7-8a2d87acef15 00:42:28.279 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5ccca79-d821-462d-921b-a401348d483a 00:42:28.279 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:28.539 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:28.540 00:42:28.540 real 0m17.956s 00:42:28.540 user 0m35.918s 00:42:28.540 sys 0m3.213s 00:42:28.540 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:28.540 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:28.540 ************************************ 00:42:28.540 END TEST lvs_grow_dirty 00:42:28.540 ************************************ 00:42:28.540 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:42:28.540 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@810 -- # type=--id 00:42:28.540 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:42:28.540 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:42:28.540 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:42:28.540 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:42:28.540 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:42:28.540 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:42:28.540 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:42:28.540 nvmf_trace.0 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:28.800 rmmod nvme_tcp 00:42:28.800 rmmod nvme_fabrics 00:42:28.800 rmmod nvme_keyring 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4799 ']' 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4799 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 4799 ']' 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 4799 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4799 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4799' 00:42:28.800 killing process with pid 4799 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 4799 00:42:28.800 13:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 4799 00:42:29.797 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:29.797 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:29.797 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:29.797 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:42:29.797 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:42:29.797 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:29.797 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:42:29.797 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:29.797 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:29.797 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:29.797 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:29.797 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:31.708 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:31.708 00:42:31.708 real 0m46.399s 00:42:31.708 user 0m55.521s 00:42:31.708 sys 0m11.146s 00:42:31.708 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:31.708 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:31.708 ************************************ 00:42:31.708 END TEST nvmf_lvs_grow 00:42:31.708 ************************************ 00:42:31.709 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:42:31.709 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:42:31.709 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:31.709 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:31.709 ************************************ 00:42:31.709 START TEST nvmf_bdev_io_wait 00:42:31.709 ************************************ 00:42:31.709 13:47:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:42:31.709 * Looking for test storage... 00:42:31.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:31.709 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:31.709 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:42:31.709 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:31.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.970 --rc genhtml_branch_coverage=1 00:42:31.970 --rc genhtml_function_coverage=1 00:42:31.970 --rc genhtml_legend=1 00:42:31.970 --rc geninfo_all_blocks=1 00:42:31.970 --rc geninfo_unexecuted_blocks=1 00:42:31.970 00:42:31.970 ' 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:31.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.970 --rc genhtml_branch_coverage=1 00:42:31.970 --rc genhtml_function_coverage=1 00:42:31.970 --rc genhtml_legend=1 00:42:31.970 --rc geninfo_all_blocks=1 00:42:31.970 --rc geninfo_unexecuted_blocks=1 00:42:31.970 00:42:31.970 ' 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:31.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.970 --rc genhtml_branch_coverage=1 00:42:31.970 --rc genhtml_function_coverage=1 00:42:31.970 --rc genhtml_legend=1 00:42:31.970 --rc geninfo_all_blocks=1 00:42:31.970 --rc geninfo_unexecuted_blocks=1 00:42:31.970 00:42:31.970 ' 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:31.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.970 --rc genhtml_branch_coverage=1 00:42:31.970 --rc genhtml_function_coverage=1 00:42:31.970 --rc genhtml_legend=1 00:42:31.970 --rc geninfo_all_blocks=1 00:42:31.970 --rc 
geninfo_unexecuted_blocks=1 00:42:31.970 00:42:31.970 ' 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.970 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:42:31.971 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:42:40.112 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
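Before picking TCP interfaces, nvmf/common.sh buckets the host's NICs by PCI vendor:device ID into e810, x722 and mlx arrays, then resolves each match to its net device under /sys/bus/pci/devices/$pci/net/. A rough standalone equivalent using lspci (an assumption: the real script walks a prebuilt pci_bus_cache rather than lspci, and its Mellanox ID list is explicit where this sketch wildcards it):

    #!/usr/bin/env bash
    # Bucket NICs by PCI ID, roughly as nvmf/common.sh@320-356 does.
    declare -a e810 x722 mlx
    while read -r addr _class id _; do
        case "$id" in
            8086:1592|8086:159b) e810+=("$addr") ;;  # Intel E810 family
            8086:37d2)           x722+=("$addr") ;;  # Intel X722
            15b3:*)              mlx+=("$addr")  ;;  # Mellanox (real list is explicit)
        esac
    done < <(lspci -Dn)

    for pci in "${e810[@]}"; do
        # Resolve each PCI function to its kernel net device, as the
        # "Found net devices under 0000:31:00.0: cvl_0_0" lines do.
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done

On this machine both 0000:31:00.0 and 0000:31:00.1 report 0x8086:0x159b, so they land in the e810 bucket and surface as cvl_0_0 and cvl_0_1.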
00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:40.113 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:40.113 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:40.113 Found net devices under 0000:31:00.0: cvl_0_0 00:42:40.113 
13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:40.113 Found net devices under 0000:31:00.1: cvl_0_1 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:40.113 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:40.113 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:40.113 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:40.113 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:40.113 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:40.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:40.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:42:40.374 00:42:40.374 --- 10.0.0.2 ping statistics --- 00:42:40.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:40.374 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:40.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:40.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:42:40.374 00:42:40.374 --- 10.0.0.1 ping statistics --- 00:42:40.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:40.374 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:40.374 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=10255 00:42:40.375 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 10255 00:42:40.375 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:42:40.375 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 10255 ']' 00:42:40.375 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:40.375 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:40.375 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:40.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
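The ip/ping block above builds the split topology every phy TCP test assumes: the target port cvl_0_0 moves into the private namespace cvl_0_0_ns_spdk as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables ACCEPT rule opens TCP/4420 toward the initiator, and one ping in each direction gates the run. Condensed into plain commands (interface names from this run; the two ports are assumed cabled back to back):

    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                  # target port leaves the root ns

    ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target ns -> root ns

From here on every nvmf_tgt invocation is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP is re-prefixed with NVMF_TARGET_NS_CMD right after the pings.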
00:42:40.375 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:40.375 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:40.375 [2024-11-07 13:47:48.329846] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:40.375 [2024-11-07 13:47:48.332497] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:42:40.375 [2024-11-07 13:47:48.332600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:40.635 [2024-11-07 13:47:48.499685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:40.635 [2024-11-07 13:47:48.601464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:40.635 [2024-11-07 13:47:48.601504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:40.635 [2024-11-07 13:47:48.601518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:40.635 [2024-11-07 13:47:48.601529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:40.635 [2024-11-07 13:47:48.601541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:40.635 [2024-11-07 13:47:48.603770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:40.635 [2024-11-07 13:47:48.603853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:40.635 [2024-11-07 13:47:48.603987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:40.635 [2024-11-07 13:47:48.604013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:40.635 [2024-11-07 13:47:48.604456] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.206 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:41.467 [2024-11-07 13:47:49.316762] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:41.467 [2024-11-07 13:47:49.316939] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:41.467 [2024-11-07 13:47:49.318333] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:41.467 [2024-11-07 13:47:49.318456] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
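Because this target was started with --wait-for-rpc, it idles before subsystem init, which is what lets bdev_io_wait install a deliberately tiny bdev_io pool before anything can allocate from it: bdev_set_options -p 5 -c 1 must land before framework_start_init, and only after that are the four poll-group threads flipped to interrupt mode. The ordering, reduced to the two RPCs (rpc.py path shortened):

    # Must run while the target is still parked in --wait-for-rpc.
    ./scripts/rpc.py bdev_set_options -p 5 -c 1   # bdev_io pool size 5, cache 1
    # Releases the target to initialize subsystems and poll groups.
    ./scripts/rpc.py framework_start_init

Shrinking the pool to five outstanding bdev_io structures is the point of the test: with bdevperf queueing 128 I/Os per job, most submissions will fail allocation and have to go through the bdev_io_wait retry path.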
00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:41.467 [2024-11-07 13:47:49.328717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:41.467 Malloc0 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:41.467 [2024-11-07 13:47:49.452947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=10605 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=10607 00:42:41.467 13:47:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:41.467 { 00:42:41.467 "params": { 00:42:41.467 "name": "Nvme$subsystem", 00:42:41.467 "trtype": "$TEST_TRANSPORT", 00:42:41.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:41.467 "adrfam": "ipv4", 00:42:41.467 "trsvcid": "$NVMF_PORT", 00:42:41.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:41.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:41.467 "hdgst": ${hdgst:-false}, 00:42:41.467 "ddgst": ${ddgst:-false} 00:42:41.467 }, 00:42:41.467 "method": "bdev_nvme_attach_controller" 00:42:41.467 } 00:42:41.467 EOF 00:42:41.467 )") 00:42:41.467 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=10609 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=10612 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:41.468 { 00:42:41.468 "params": { 00:42:41.468 "name": "Nvme$subsystem", 00:42:41.468 "trtype": "$TEST_TRANSPORT", 00:42:41.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:41.468 "adrfam": "ipv4", 00:42:41.468 "trsvcid": "$NVMF_PORT", 00:42:41.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:41.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:41.468 "hdgst": ${hdgst:-false}, 00:42:41.468 "ddgst": ${ddgst:-false} 00:42:41.468 }, 00:42:41.468 "method": "bdev_nvme_attach_controller" 00:42:41.468 } 00:42:41.468 EOF 00:42:41.468 )") 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:41.468 { 00:42:41.468 "params": { 00:42:41.468 "name": "Nvme$subsystem", 00:42:41.468 "trtype": "$TEST_TRANSPORT", 00:42:41.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:41.468 "adrfam": "ipv4", 00:42:41.468 "trsvcid": "$NVMF_PORT", 00:42:41.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:41.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:41.468 "hdgst": ${hdgst:-false}, 00:42:41.468 "ddgst": ${ddgst:-false} 00:42:41.468 }, 00:42:41.468 "method": "bdev_nvme_attach_controller" 00:42:41.468 } 00:42:41.468 EOF 00:42:41.468 )") 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:41.468 { 00:42:41.468 "params": { 00:42:41.468 "name": "Nvme$subsystem", 00:42:41.468 "trtype": "$TEST_TRANSPORT", 00:42:41.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:41.468 "adrfam": "ipv4", 00:42:41.468 "trsvcid": "$NVMF_PORT", 00:42:41.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:41.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:41.468 "hdgst": ${hdgst:-false}, 00:42:41.468 "ddgst": ${ddgst:-false} 00:42:41.468 }, 00:42:41.468 "method": "bdev_nvme_attach_controller" 00:42:41.468 } 00:42:41.468 EOF 00:42:41.468 )") 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 10605 00:42:41.468 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:41.729 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:41.729 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
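Four bdevperf instances are launched concurrently, one per workload (write, read, flush, unmap), each pinned to its own core (-m 0x10/0x20/0x40/0x80) with a distinct instance id (-i 1..4) so the hugepage file prefixes spdk1..spdk4 do not collide; each reads its NVMe-oF attach config from a process substitution exposed as /dev/fd/63. A sketch of the write job, assuming gen_nvmf_target_json from nvmf/common.sh is in scope:

    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &   # qd 128, 4 KiB I/O, 1 s run, 256 MiB of memory
    WRITE_PID=$!
    wait "$WRITE_PID"   # reap the write job first; read/flush/unmap follow the same pattern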
00:42:41.729 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:41.729 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:41.729 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:41.729 "params": { 00:42:41.729 "name": "Nvme1", 00:42:41.729 "trtype": "tcp", 00:42:41.729 "traddr": "10.0.0.2", 00:42:41.729 "adrfam": "ipv4", 00:42:41.729 "trsvcid": "4420", 00:42:41.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:41.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:41.729 "hdgst": false, 00:42:41.729 "ddgst": false 00:42:41.729 }, 00:42:41.729 "method": "bdev_nvme_attach_controller" 00:42:41.729 }' 00:42:41.729 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:41.729 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:41.729 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:41.729 "params": { 00:42:41.729 "name": "Nvme1", 00:42:41.729 "trtype": "tcp", 00:42:41.729 "traddr": "10.0.0.2", 00:42:41.729 "adrfam": "ipv4", 00:42:41.729 "trsvcid": "4420", 00:42:41.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:41.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:41.729 "hdgst": false, 00:42:41.729 "ddgst": false 00:42:41.729 }, 00:42:41.729 "method": "bdev_nvme_attach_controller" 00:42:41.729 }' 00:42:41.729 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:41.729 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:41.729 "params": { 00:42:41.729 "name": "Nvme1", 00:42:41.729 "trtype": "tcp", 00:42:41.729 "traddr": "10.0.0.2", 00:42:41.729 "adrfam": "ipv4", 00:42:41.729 "trsvcid": "4420", 00:42:41.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:41.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:41.729 "hdgst": false, 00:42:41.729 "ddgst": false 00:42:41.729 }, 00:42:41.729 "method": "bdev_nvme_attach_controller" 00:42:41.729 }' 00:42:41.729 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:41.729 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:41.729 "params": { 00:42:41.729 "name": "Nvme1", 00:42:41.729 "trtype": "tcp", 00:42:41.729 "traddr": "10.0.0.2", 00:42:41.729 "adrfam": "ipv4", 00:42:41.729 "trsvcid": "4420", 00:42:41.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:41.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:41.729 "hdgst": false, 00:42:41.729 "ddgst": false 00:42:41.729 }, 00:42:41.729 "method": "bdev_nvme_attach_controller" 00:42:41.729 }' 00:42:41.729 [2024-11-07 13:47:49.529674] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:42:41.729 [2024-11-07 13:47:49.529752] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:42:41.729 [2024-11-07 13:47:49.537375] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:42:41.729 [2024-11-07 13:47:49.537375] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:42:41.729 [2024-11-07 13:47:49.537484] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:42:41.729 [2024-11-07 13:47:49.537486] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:42:41.729 [2024-11-07 13:47:49.539961] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:42:41.729 [2024-11-07 13:47:49.540056] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:42:41.729 [2024-11-07 13:47:49.729270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:41.989 [2024-11-07 13:47:49.769795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:41.989 [2024-11-07 13:47:49.819476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:41.989 [2024-11-07 13:47:49.828030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:41.989 [2024-11-07 13:47:49.864180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:41.989 [2024-11-07 13:47:49.865746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:41.989 [2024-11-07 13:47:49.912890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:42:41.989 [2024-11-07 13:47:49.959176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:42.249 Running I/O for 1 seconds... 00:42:42.509 Running I/O for 1 seconds... 00:42:42.509 Running I/O for 1 seconds... 00:42:42.509 Running I/O for 1 seconds... 
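Per-job result tables follow, one per workload. The IOPS and MiB/s columns are mutually consistent given the fixed 4096-byte I/O size; a quick check against the write job's summary line:

    # 7919.24 IOPS * 4096 B/IO = ~32.4 MB/s = ~30.93 MiB/s, matching the table below
    echo 'scale=2; 7919.24 * 4096 / (1024 * 1024)' | bc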
00:42:43.450 7910.00 IOPS, 30.90 MiB/s 00:42:43.450 Latency(us) 00:42:43.450 [2024-11-07T12:47:51.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:43.450 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:42:43.450 Nvme1n1 : 1.02 7919.24 30.93 0.00 0.00 16030.47 2239.15 23046.83 00:42:43.450 [2024-11-07T12:47:51.457Z] =================================================================================================================== 00:42:43.450 [2024-11-07T12:47:51.457Z] Total : 7919.24 30.93 0.00 0.00 16030.47 2239.15 23046.83 00:42:43.450 7442.00 IOPS, 29.07 MiB/s 00:42:43.450 Latency(us) 00:42:43.450 [2024-11-07T12:47:51.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:43.450 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:42:43.450 Nvme1n1 : 1.01 7526.84 29.40 0.00 0.00 16950.65 5215.57 30583.47 00:42:43.450 [2024-11-07T12:47:51.457Z] =================================================================================================================== 00:42:43.450 [2024-11-07T12:47:51.457Z] Total : 7526.84 29.40 0.00 0.00 16950.65 5215.57 30583.47 00:42:43.450 172912.00 IOPS, 675.44 MiB/s 00:42:43.450 Latency(us) 00:42:43.450 [2024-11-07T12:47:51.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:43.450 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:42:43.450 Nvme1n1 : 1.00 172553.84 674.04 0.00 0.00 737.77 341.33 2061.65 00:42:43.450 [2024-11-07T12:47:51.457Z] =================================================================================================================== 00:42:43.450 [2024-11-07T12:47:51.457Z] Total : 172553.84 674.04 0.00 0.00 737.77 341.33 2061.65 00:42:43.450 13350.00 IOPS, 52.15 MiB/s 00:42:43.451 Latency(us) 00:42:43.451 [2024-11-07T12:47:51.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:43.451 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:42:43.451 Nvme1n1 : 1.01 13423.01 52.43 0.00 0.00 9507.06 2839.89 15947.09 00:42:43.451 [2024-11-07T12:47:51.458Z] =================================================================================================================== 00:42:43.451 [2024-11-07T12:47:51.458Z] Total : 13423.01 52.43 0.00 0.00 9507.06 2839.89 15947.09 00:42:43.711 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 10607 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 10609 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 10612 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:43.971 rmmod nvme_tcp 00:42:43.971 rmmod nvme_fabrics 00:42:43.971 rmmod nvme_keyring 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 10255 ']' 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 10255 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 10255 ']' 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 10255 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:43.971 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 10255 00:42:44.231 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:44.231 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:44.231 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 10255' 00:42:44.231 killing process with pid 10255 00:42:44.231 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 10255 00:42:44.231 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 10255 00:42:44.803 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:44.803 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:44.803 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:44.803 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:42:44.803 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:42:44.803 13:47:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:44.803 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:42:44.803 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:44.803 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:44.803 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:44.803 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:44.803 13:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:47.347 13:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:47.347 00:42:47.347 real 0m15.209s 00:42:47.347 user 0m20.787s 00:42:47.347 sys 0m8.660s 00:42:47.347 13:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:47.347 13:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:47.347 ************************************ 00:42:47.347 END TEST nvmf_bdev_io_wait 00:42:47.347 ************************************ 00:42:47.347 13:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:42:47.347 13:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:42:47.347 13:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:47.347 13:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:47.347 ************************************ 00:42:47.347 START TEST nvmf_queue_depth 00:42:47.347 ************************************ 00:42:47.347 13:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:42:47.347 * Looking for test storage... 
00:42:47.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:47.347 13:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:47.347 13:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:42:47.347 13:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:47.347 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:47.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.348 --rc genhtml_branch_coverage=1 00:42:47.348 --rc genhtml_function_coverage=1 00:42:47.348 --rc genhtml_legend=1 00:42:47.348 --rc geninfo_all_blocks=1 00:42:47.348 --rc geninfo_unexecuted_blocks=1 00:42:47.348 00:42:47.348 ' 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:47.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.348 --rc genhtml_branch_coverage=1 00:42:47.348 --rc genhtml_function_coverage=1 00:42:47.348 --rc genhtml_legend=1 00:42:47.348 --rc geninfo_all_blocks=1 00:42:47.348 --rc geninfo_unexecuted_blocks=1 00:42:47.348 00:42:47.348 ' 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:47.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.348 --rc genhtml_branch_coverage=1 00:42:47.348 --rc genhtml_function_coverage=1 00:42:47.348 --rc genhtml_legend=1 00:42:47.348 --rc geninfo_all_blocks=1 00:42:47.348 --rc geninfo_unexecuted_blocks=1 00:42:47.348 00:42:47.348 ' 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:47.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:47.348 --rc genhtml_branch_coverage=1 00:42:47.348 --rc genhtml_function_coverage=1 00:42:47.348 --rc genhtml_legend=1 00:42:47.348 --rc geninfo_all_blocks=1 00:42:47.348 --rc 
geninfo_unexecuted_blocks=1 00:42:47.348 00:42:47.348 ' 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:47.348 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:47.349 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:47.349 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:47.349 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:42:47.349 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:55.486 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:55.486 13:48:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:55.487 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:55.487 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:42:55.487 Found net devices under 0000:31:00.0: cvl_0_0 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:55.487 Found net devices under 0000:31:00.1: cvl_0_1 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:55.487 13:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:55.487 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:55.487 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:55.487 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:55.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:55.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:42:55.487 00:42:55.487 --- 10.0.0.2 ping statistics --- 00:42:55.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:55.487 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:42:55.487 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:55.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:55.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:42:55.487 00:42:55.487 --- 10.0.0.1 ping statistics --- 00:42:55.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:55.487 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:42:55.487 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=15727 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 15727 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 15727 ']' 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:55.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
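The sequence above builds the usual two-namespace TCP topology for phy runs: the target-side netdev cvl_0_0 is moved into namespace cvl_0_0_ns_spdk and given 10.0.0.2/24, the initiator-side cvl_0_1 stays in the default namespace with 10.0.0.1/24, an iptables rule admits TCP port 4420 traffic arriving on cvl_0_1, and the cross-namespace pings confirm reachability in both directions. Every target invocation is then prefixed with the namespace wrapper, as in the nvmf_tgt command line above (path shortened here):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2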
00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:55.488 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:55.488 [2024-11-07 13:48:03.163593] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:55.488 [2024-11-07 13:48:03.165988] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:42:55.488 [2024-11-07 13:48:03.166079] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:55.488 [2024-11-07 13:48:03.315149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:55.488 [2024-11-07 13:48:03.424367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:55.488 [2024-11-07 13:48:03.424429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:55.488 [2024-11-07 13:48:03.424444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:55.488 [2024-11-07 13:48:03.424458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:55.488 [2024-11-07 13:48:03.424472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:55.488 [2024-11-07 13:48:03.425928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:55.748 [2024-11-07 13:48:03.699716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:55.748 [2024-11-07 13:48:03.700113] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
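nvmfappstart launches the target inside that namespace in interrupt mode: the EAL banner reports a single available core because the core mask is 0x2 (core 1 only), and the thread.c NOTICE lines confirm the app thread and the nvmf poll group run event-driven rather than busy-polling. The launch as traced (workspace path shortened):

  ip netns exec cvl_0_0_ns_spdk \
      build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
  # -i 0     shared-memory id; -e 0xFFFF enables every tracepoint group
  # -m 0x2   pin to core 1; --interrupt-mode lets reactors sleep until work arrives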
00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:56.009 [2024-11-07 13:48:03.955291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.009 13:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:56.271 Malloc0 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
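Once /var/tmp/spdk.sock is up, queue_depth.sh provisions the target over JSON-RPC and then measures it from a separate bdevperf process. The sequence below condenses the rpc_cmd calls from this trace into the rpc.py form the harness wraps; the bdevperf attach and run appear next in the log:

  # target side: transport, backing bdev, subsystem, namespace, listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: queue depth 1024, 4 KiB verify I/O for 10 seconds
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

-q 1024 is the queue depth under test; in the run below throughput ramps from roughly 8k to 10.3k IOPS as the queues fill, and the JSON summary reports the averages.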
00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:56.271 [2024-11-07 13:48:04.082942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=16070 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 16070 /var/tmp/bdevperf.sock 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 16070 ']' 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:56.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:56.271 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:56.271 [2024-11-07 13:48:04.168191] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
00:42:56.271 [2024-11-07 13:48:04.168299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid16070 ] 00:42:56.531 [2024-11-07 13:48:04.305440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:56.531 [2024-11-07 13:48:04.402537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:57.102 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:57.102 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:42:57.102 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:42:57.102 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:57.102 13:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:57.364 NVMe0n1 00:42:57.364 13:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:57.364 13:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:57.364 Running I/O for 10 seconds... 00:42:59.252 8067.00 IOPS, 31.51 MiB/s [2024-11-07T12:48:08.645Z] 8192.00 IOPS, 32.00 MiB/s [2024-11-07T12:48:09.589Z] 8537.00 IOPS, 33.35 MiB/s [2024-11-07T12:48:10.532Z] 9128.50 IOPS, 35.66 MiB/s [2024-11-07T12:48:11.475Z] 9430.80 IOPS, 36.84 MiB/s [2024-11-07T12:48:12.418Z] 9733.00 IOPS, 38.02 MiB/s [2024-11-07T12:48:13.360Z] 9928.57 IOPS, 38.78 MiB/s [2024-11-07T12:48:14.303Z] 10039.75 IOPS, 39.22 MiB/s [2024-11-07T12:48:15.245Z] 10162.56 IOPS, 39.70 MiB/s [2024-11-07T12:48:15.505Z] 10264.70 IOPS, 40.10 MiB/s 00:43:07.498 Latency(us) 00:43:07.498 [2024-11-07T12:48:15.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:07.498 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:43:07.498 Verification LBA range: start 0x0 length 0x4000 00:43:07.498 NVMe0n1 : 10.05 10304.89 40.25 0.00 0.00 98985.98 11468.80 77769.39 00:43:07.498 [2024-11-07T12:48:15.506Z] =================================================================================================================== 00:43:07.499 [2024-11-07T12:48:15.506Z] Total : 10304.89 40.25 0.00 0.00 98985.98 11468.80 77769.39 00:43:07.499 { 00:43:07.499 "results": [ 00:43:07.499 { 00:43:07.499 "job": "NVMe0n1", 00:43:07.499 "core_mask": "0x1", 00:43:07.499 "workload": "verify", 00:43:07.499 "status": "finished", 00:43:07.499 "verify_range": { 00:43:07.499 "start": 0, 00:43:07.499 "length": 16384 00:43:07.499 }, 00:43:07.499 "queue_depth": 1024, 00:43:07.499 "io_size": 4096, 00:43:07.499 "runtime": 10.052896, 00:43:07.499 "iops": 10304.891247258502, 00:43:07.499 "mibps": 40.25348143460352, 00:43:07.499 "io_failed": 0, 00:43:07.499 "io_timeout": 0, 00:43:07.499 "avg_latency_us": 98985.98264249538, 00:43:07.499 "min_latency_us": 11468.8, 00:43:07.499 "max_latency_us": 77769.38666666667 00:43:07.499 } 00:43:07.499 ], 
00:43:07.499 "core_count": 1 00:43:07.499 } 00:43:07.499 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 16070 00:43:07.499 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 16070 ']' 00:43:07.499 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 16070 00:43:07.499 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:43:07.499 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:07.499 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 16070 00:43:07.499 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:43:07.499 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:43:07.499 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 16070' 00:43:07.499 killing process with pid 16070 00:43:07.499 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 16070 00:43:07.499 Received shutdown signal, test time was about 10.000000 seconds 00:43:07.499 00:43:07.499 Latency(us) 00:43:07.499 [2024-11-07T12:48:15.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:07.499 [2024-11-07T12:48:15.506Z] =================================================================================================================== 00:43:07.499 [2024-11-07T12:48:15.506Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:07.499 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 16070 00:43:08.071 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:43:08.071 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:43:08.071 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:08.071 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:43:08.071 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:08.071 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:43:08.071 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:08.071 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:08.071 rmmod nvme_tcp 00:43:08.071 rmmod nvme_fabrics 00:43:08.071 rmmod nvme_keyring 00:43:08.071 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:08.071 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:43:08.071 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:43:08.071 13:48:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 15727 ']' 00:43:08.071 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 15727 00:43:08.071 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 15727 ']' 00:43:08.071 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 15727 00:43:08.071 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:43:08.071 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:08.071 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 15727 00:43:08.333 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:43:08.333 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:43:08.333 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 15727' 00:43:08.333 killing process with pid 15727 00:43:08.333 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 15727 00:43:08.333 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 15727 00:43:08.904 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:08.904 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:08.904 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:08.904 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:43:08.904 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:43:08.904 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:08.904 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:43:08.904 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:08.904 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:08.904 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:08.904 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:08.904 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:11.451 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:11.451 00:43:11.451 real 0m23.980s 00:43:11.451 user 0m26.511s 00:43:11.451 sys 0m7.891s 00:43:11.452 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:43:11.452 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:11.452 ************************************ 00:43:11.452 END TEST nvmf_queue_depth 00:43:11.452 ************************************ 00:43:11.452 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:43:11.452 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:43:11.452 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:11.452 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:11.452 ************************************ 00:43:11.452 START TEST nvmf_target_multipath 00:43:11.452 ************************************ 00:43:11.452 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:43:11.452 * Looking for test storage... 00:43:11.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:11.452 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:11.452 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:43:11.452 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:43:11.452 13:48:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:11.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.452 --rc genhtml_branch_coverage=1 00:43:11.452 --rc genhtml_function_coverage=1 00:43:11.452 --rc genhtml_legend=1 00:43:11.452 --rc geninfo_all_blocks=1 00:43:11.452 --rc geninfo_unexecuted_blocks=1 00:43:11.452 00:43:11.452 ' 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:11.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.452 --rc genhtml_branch_coverage=1 00:43:11.452 --rc genhtml_function_coverage=1 00:43:11.452 --rc genhtml_legend=1 00:43:11.452 --rc geninfo_all_blocks=1 00:43:11.452 --rc geninfo_unexecuted_blocks=1 00:43:11.452 00:43:11.452 ' 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:11.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.452 --rc genhtml_branch_coverage=1 00:43:11.452 --rc genhtml_function_coverage=1 00:43:11.452 --rc genhtml_legend=1 00:43:11.452 --rc geninfo_all_blocks=1 00:43:11.452 --rc 
geninfo_unexecuted_blocks=1 00:43:11.452 00:43:11.452 ' 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:11.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.452 --rc genhtml_branch_coverage=1 00:43:11.452 --rc genhtml_function_coverage=1 00:43:11.452 --rc genhtml_legend=1 00:43:11.452 --rc geninfo_all_blocks=1 00:43:11.452 --rc geninfo_unexecuted_blocks=1 00:43:11.452 00:43:11.452 ' 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
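The scripts/common.sh lines above are the harness checking the installed lcov version (lt 1.15 2): cmp_versions splits each version string on ".", "-" and ":" into arrays and compares them element-wise, treating a missing element as 0 and letting the first differing component decide. A condensed sketch of that logic; the real helper also dispatches gt/eq through an op argument and validates each component with the traced decimal check:

  lt() {   # lt 1.15 2  -> true when $1 < $2
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first larger element: not less
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller element: less
      done
      return 1   # versions equal
  }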
00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.452 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:11.453 13:48:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:43:11.453 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
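nvmftestinit with NET_TYPE=phy now probes for supported physical NICs: gather_supported_nvmf_pci_devs (traced next) matches PCI vendor:device IDs against tables of Intel E810/X722 and Mellanox parts, then resolves each hit to its kernel netdev through sysfs. A rough sketch of that resolution for the E810 ID this rig matches (0x8086:0x159b), assuming the standard sysfs layout:

  # enumerate E810 functions and the net devices bound to them
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done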
00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:19.595 13:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:19.595 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:19.596 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:19.596 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:19.596 13:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:19.596 Found net devices under 0000:31:00.0: cvl_0_0 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:19.596 Found net devices under 0000:31:00.1: cvl_0_1 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:19.596 13:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:19.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:19.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:43:19.596 00:43:19.596 --- 10.0.0.2 ping statistics --- 00:43:19.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:19.596 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:19.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:19.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:43:19.596 00:43:19.596 --- 10.0.0.1 ping statistics --- 00:43:19.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:19.596 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:43:19.596 only one NIC for nvmf test 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:19.596 rmmod nvme_tcp 00:43:19.596 rmmod nvme_fabrics 00:43:19.596 rmmod nvme_keyring 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:43:19.596 13:48:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:19.596 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:19.597 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:43:19.597 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:43:19.597 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:19.597 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:43:19.597 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:19.597 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:19.597 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:19.597 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:19.597 13:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:43:21.541 13:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:21.541 00:43:21.541 real 0m10.516s 00:43:21.541 user 0m2.404s 00:43:21.541 sys 0m6.027s 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:43:21.541 ************************************ 00:43:21.541 END TEST nvmf_target_multipath 00:43:21.541 ************************************ 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:21.541 ************************************ 00:43:21.541 START TEST nvmf_zcopy 00:43:21.541 ************************************ 00:43:21.541 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:43:21.541 * Looking for test storage... 
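
The multipath suite above exits early ("only one NIC for nvmf test"), and its nvmftestfini teardown runs twice: once from target/multipath.sh@47 and again from the EXIT trap at target/multipath.sh@1, which is why the sync/modprobe/iptables sequence repeats verbatim at 13:48:27 and 13:48:29. A minimal sketch of that teardown, reconstructed from the trace (names follow nvmf/common.sh; treat it as an illustration of the pattern, not the verbatim source):

    # Sketch of the nvmftestfini teardown traced above (reconstruction, not the
    # literal nvmf/common.sh source).
    nvmftestfini() {
        sync
        set +e
        # Unload the kernel initiator modules; retry while references drain.
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1
        done
        set -e
        # Restore iptables minus any rule tagged with the SPDK_NVMF comment.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
        # Drop the target namespace and clear the initiator-side address.
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
        ip -4 addr flush cvl_0_1
    }

The nvmf_zcopy suite then starts and probes for test storage; its trace continues below.
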
00:43:21.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:43:21.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:43:21.850 --rc genhtml_branch_coverage=1
00:43:21.850 --rc genhtml_function_coverage=1
00:43:21.850 --rc genhtml_legend=1
00:43:21.850 --rc geninfo_all_blocks=1
00:43:21.850 --rc geninfo_unexecuted_blocks=1
00:43:21.850
00:43:21.850 '
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:43:21.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:43:21.850 --rc genhtml_branch_coverage=1
00:43:21.850 --rc genhtml_function_coverage=1
00:43:21.850 --rc genhtml_legend=1
00:43:21.850 --rc geninfo_all_blocks=1
00:43:21.850 --rc geninfo_unexecuted_blocks=1
00:43:21.850
00:43:21.850 '
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:43:21.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:43:21.850 --rc genhtml_branch_coverage=1
00:43:21.850 --rc genhtml_function_coverage=1
00:43:21.850 --rc genhtml_legend=1
00:43:21.850 --rc geninfo_all_blocks=1
00:43:21.850 --rc geninfo_unexecuted_blocks=1
00:43:21.850
00:43:21.850 '
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:43:21.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:43:21.850 --rc genhtml_branch_coverage=1
00:43:21.850 --rc genhtml_function_coverage=1
00:43:21.850 --rc genhtml_legend=1
00:43:21.850 --rc geninfo_all_blocks=1
00:43:21.850 --rc geninfo_unexecuted_blocks=1
00:43:21.850
00:43:21.850 '
00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy --
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:21.850 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:21.851 13:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:43:21.851 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:43:30.034 13:48:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:30.034 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:30.035 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:30.035 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:30.035 Found net devices under 0000:31:00.0: cvl_0_0 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:30.035 Found net devices under 0000:31:00.1: cvl_0_1 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:30.035 13:48:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:30.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:30.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:43:30.035 00:43:30.035 --- 10.0.0.2 ping statistics --- 00:43:30.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:30.035 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:30.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:30.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:43:30.035 00:43:30.035 --- 10.0.0.1 ping statistics --- 00:43:30.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:30.035 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=27880 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 27880 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 27880 ']' 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:30.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:30.035 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:30.035 [2024-11-07 13:48:37.805457] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:30.035 [2024-11-07 13:48:37.807771] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:43:30.036 [2024-11-07 13:48:37.807856] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:30.036 [2024-11-07 13:48:37.970277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:30.296 [2024-11-07 13:48:38.067648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:30.296 [2024-11-07 13:48:38.067689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:30.296 [2024-11-07 13:48:38.067703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:30.296 [2024-11-07 13:48:38.067715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:30.296 [2024-11-07 13:48:38.067726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:30.296 [2024-11-07 13:48:38.069060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:30.556 [2024-11-07 13:48:38.306820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:30.556 [2024-11-07 13:48:38.307136] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
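
Before the target app comes up, nvmftestinit (traced above) rebuilds the split topology across the two E810 ports: cvl_0_0 moves into a dedicated namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), a tagged iptables exception opens TCP port 4420, and both directions are verified with single pings before nvmf_tgt is launched inside the namespace in interrupt mode. A condensed sketch of that sequence, with paths shortened relative to the spdk checkout; the wait loop at the end is an illustrative stand-in for waitforlisten, not its actual implementation:

    # Namespace plumbing and target launch, condensed from the trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Start the target in the namespace, single core (-m 0x2), interrupt mode.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll the RPC socket until the app answers.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
        sleep 0.5
    done
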
00:43:30.556 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:30.557 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:43:30.557 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:30.557 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:30.557 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:30.817 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:30.817 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:43:30.817 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:43:30.817 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.817 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:30.817 [2024-11-07 13:48:38.606235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:30.817 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.817 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:43:30.817 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.817 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:30.818 [2024-11-07 13:48:38.634629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:43:30.818 13:48:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:30.818 malloc0 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:30.818 { 00:43:30.818 "params": { 00:43:30.818 "name": "Nvme$subsystem", 00:43:30.818 "trtype": "$TEST_TRANSPORT", 00:43:30.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:30.818 "adrfam": "ipv4", 00:43:30.818 "trsvcid": "$NVMF_PORT", 00:43:30.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:30.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:30.818 "hdgst": ${hdgst:-false}, 00:43:30.818 "ddgst": ${ddgst:-false} 00:43:30.818 }, 00:43:30.818 "method": "bdev_nvme_attach_controller" 00:43:30.818 } 00:43:30.818 EOF 00:43:30.818 )") 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:43:30.818 13:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:30.818 "params": { 00:43:30.818 "name": "Nvme1", 00:43:30.818 "trtype": "tcp", 00:43:30.818 "traddr": "10.0.0.2", 00:43:30.818 "adrfam": "ipv4", 00:43:30.818 "trsvcid": "4420", 00:43:30.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:30.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:30.818 "hdgst": false, 00:43:30.818 "ddgst": false 00:43:30.818 }, 00:43:30.818 "method": "bdev_nvme_attach_controller" 00:43:30.818 }' 00:43:30.818 [2024-11-07 13:48:38.796686] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
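
With the target listening, zcopy.sh provisions it over rpc_cmd: a TCP transport created with in-capsule data disabled (-c 0) and zero-copy enabled (--zcopy), subsystem nqn.2016-06.io.spdk:cnode1 allowing any host (-a) with at most 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MB malloc bdev with 4096-byte blocks attached as namespace 1. The same steps as direct rpc.py calls, with the flags copied from the trace and the rpc.py path assumed relative to the spdk checkout:

    # Provisioning traced above, issued straight at the target's RPC socket.
    RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0    # 32 MB bdev, 4096-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The first bdevperf pass (10 s verify workload, queue depth 128, 8 KiB I/O) is then pointed at this namespace via the generated JSON on /dev/fd/62; its EAL startup banner continues below.
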
00:43:30.818 [2024-11-07 13:48:38.796812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid28147 ]
00:43:31.078 [2024-11-07 13:48:38.950675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:31.078 [2024-11-07 13:48:39.047766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:43:31.648 Running I/O for 10 seconds...
00:43:33.527 5865.00 IOPS, 45.82 MiB/s
[2024-11-07T12:48:42.917Z] 5935.00 IOPS, 46.37 MiB/s
[2024-11-07T12:48:43.856Z] 6299.33 IOPS, 49.21 MiB/s
[2024-11-07T12:48:44.796Z] 6890.00 IOPS, 53.83 MiB/s
[2024-11-07T12:48:45.736Z] 7248.20 IOPS, 56.63 MiB/s
[2024-11-07T12:48:46.678Z] 7484.33 IOPS, 58.47 MiB/s
[2024-11-07T12:48:47.618Z] 7654.00 IOPS, 59.80 MiB/s
[2024-11-07T12:48:48.560Z] 7778.88 IOPS, 60.77 MiB/s
[2024-11-07T12:48:49.943Z] 7876.78 IOPS, 61.54 MiB/s
[2024-11-07T12:48:49.943Z] 7957.00 IOPS, 62.16 MiB/s
00:43:41.936 Latency(us)
00:43:41.936 [2024-11-07T12:48:49.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:41.936 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:43:41.936 Verification LBA range: start 0x0 length 0x1000
00:43:41.936 Nvme1n1 : 10.01 7960.55 62.19 0.00 0.00 16023.51 2266.45 30583.47
00:43:41.936 [2024-11-07T12:48:49.943Z] ===================================================================================================================
00:43:41.936 [2024-11-07T12:48:49.943Z] Total : 7960.55 62.19 0.00 0.00 16023.51 2266.45 30583.47
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=30127
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:43:42.196 {
00:43:42.196 "params": {
00:43:42.196 "name": "Nvme$subsystem",
00:43:42.196 "trtype": "$TEST_TRANSPORT",
00:43:42.196 "traddr": "$NVMF_FIRST_TARGET_IP",
00:43:42.196 "adrfam": "ipv4",
00:43:42.196 "trsvcid": "$NVMF_PORT",
00:43:42.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:43:42.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:43:42.196 "hdgst": ${hdgst:-false},
00:43:42.196 "ddgst": ${ddgst:-false}
00:43:42.196 },
00:43:42.196 "method": "bdev_nvme_attach_controller"
00:43:42.196 }
00:43:42.196 EOF
00:43:42.196 )")
[2024-11-07 13:48:50.137658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:42.196 [2024-11-07 13:48:50.137703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:43:42.196 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:43:42.196 "params": {
00:43:42.196 "name": "Nvme1",
00:43:42.196 "trtype": "tcp",
00:43:42.196 "traddr": "10.0.0.2",
00:43:42.196 "adrfam": "ipv4",
00:43:42.196 "trsvcid": "4420",
00:43:42.196 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:43:42.196 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:43:42.196 "hdgst": false,
00:43:42.196 "ddgst": false
00:43:42.196 },
00:43:42.196 "method": "bdev_nvme_attach_controller"
00:43:42.196 }'
[2024-11-07 13:48:50.149619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:42.196 [2024-11-07 13:48:50.149640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-11-07 13:48:50.161594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:42.196 [2024-11-07 13:48:50.161610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-11-07 13:48:50.173602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:42.196 [2024-11-07 13:48:50.173618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-11-07 13:48:50.185597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:42.196 [2024-11-07 13:48:50.185613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-11-07 13:48:50.197587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:42.196 [2024-11-07 13:48:50.197603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:42.457 [2024-11-07 13:48:50.209606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:42.457 [2024-11-07 13:48:50.209622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:42.457 [2024-11-07 13:48:50.219664] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization...
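
The verify pass above settles at 7960.55 IOPS (62.19 MiB/s) with roughly 16 ms average completion latency at queue depth 128. zcopy.sh then records perfpid=30127 and launches a second bdevperf for 5 seconds of 50/50 random read/write, again handing it the generated attach-controller JSON through /dev/fd/63; the "Requested NSID 1 already in use" pairs interleaved with its startup are the target rejecting repeated nvmf_subsystem_add_ns calls while malloc0 still holds NSID 1. The invocation pattern, with process substitution standing in for the /dev/fd/63 handoff seen in the trace (gen_nvmf_target_json is the nvmf/common.sh helper whose output is printed above):

    # Second perf pass traced above: 5 s of mixed random I/O while RPC churn runs.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!    # 30127 in this run

Its EAL startup banner continues below, interleaved with further namespace errors.
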
00:43:42.457 [2024-11-07 13:48:50.219767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid30127 ] 00:43:42.457 [2024-11-07 13:48:50.221596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.221612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:42.457 [2024-11-07 13:48:50.233587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.233603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:42.457 [2024-11-07 13:48:50.245599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.245615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:42.457 [2024-11-07 13:48:50.257589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.257605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:42.457 [2024-11-07 13:48:50.269597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.269613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:42.457 [2024-11-07 13:48:50.281597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.281612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:42.457 [2024-11-07 13:48:50.293585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.293601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:42.457 [2024-11-07 13:48:50.305598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.305614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:42.457 [2024-11-07 13:48:50.317600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.317616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:42.457 [2024-11-07 13:48:50.329586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.329601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:42.457 [2024-11-07 13:48:50.341595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.341611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:42.457 [2024-11-07 13:48:50.353602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.353618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:42.457 [2024-11-07 13:48:50.356340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:42.457 [2024-11-07 13:48:50.365598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:42.457 [2024-11-07 13:48:50.365614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace
00:43:42.457 [2024-11-07 13:48:50.377597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:42.457 [2024-11-07 13:48:50.377612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:42.457 [2024-11-07 13:48:50.453675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
... (the same two ERROR lines repeat at roughly 12 ms intervals from 13:48:50.389586 through 13:48:50.969617; only the timestamps differ, duplicates elided) ...
00:43:42.979 Running I/O for 5 seconds...
00:43:43.239 [2024-11-07 13:48:50.986838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:43.239 [2024-11-07 13:48:50.986858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
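Each ERROR pair is one failed RPC: nvmf_rpc.c rejects an nvmf_subsystem_add_ns call because spdk_nvmf_subsystem_add_ns_ext() finds NSID 1 already allocated, so this burst is the expected signature of a loop that keeps re-adding the same NSID. A minimal sketch of such a loop follows; the actual driving script is not visible in this part of the log, and the NQN, bdev names, and option spelling are illustrative (check scripts/rpc.py --help for your SPDK version):

  NQN=nqn.2016-06.io.spdk:cnode1                                   # placeholder subsystem NQN
  ./scripts/rpc.py nvmf_subsystem_add_ns --nsid 1 "$NQN" Malloc0   # first add succeeds
  while true; do
      # every further add with the same NSID fails with the pair of ERROR lines above
      ./scripts/rpc.py nvmf_subsystem_add_ns --nsid 1 "$NQN" Malloc1 || true
      sleep 0.012                                                  # matches the ~12 ms cadence in the log
  done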
... (the ERROR pair keeps repeating at ~12-16 ms intervals from 13:48:51.001159 through 13:48:51.978117; duplicates elided) ...
00:43:44.023 16920.00 IOPS, 132.19 MiB/s [2024-11-07T12:48:52.030Z]
... (the ERROR pair keeps repeating at ~14 ms intervals from 13:48:51.993521 through 13:48:52.149722; duplicates elided) ...
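The periodic progress line is consistent with an 8 KiB I/O size (an assumption here, since the I/O configuration is not shown in this part of the log): 16920 IOPS x 8 KiB = 135360 KiB/s = 132.19 MiB/s. A one-liner to check the conversion:

  awk 'BEGIN { printf "%.2f MiB/s\n", 16920.00 * 8 / 1024 }'   # -> 132.19 MiB/s, matching the report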
... (the ERROR pair keeps repeating at ~12-15 ms intervals from 13:48:52.163437 through 13:48:52.981906; duplicates elided) ...
00:43:45.067 16996.50 IOPS, 132.79 MiB/s [2024-11-07T12:48:53.074Z]
... (the ERROR pair keeps repeating at ~14 ms intervals from 13:48:52.997483 through 13:48:53.976980; duplicates elided) ...
00:43:46.109 17003.67 IOPS, 132.84 MiB/s [2024-11-07T12:48:54.116Z]
... (the ERROR pair keeps repeating at ~14 ms intervals from 13:48:53.991702 through 13:48:54.035579; duplicates elided) ...
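Since the failure signature is uniform across the whole run, a quick post-mortem check of the captured console output can confirm that every rejected add produced exactly this pair of lines (console.log is a placeholder for wherever the console text was saved):

  grep -c 'Requested NSID 1 already in use' console.log
  grep -c 'Unable to add namespace' console.log    # the two counts should match one-to-one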
... (the ERROR pair keeps repeating at ~14 ms intervals from 13:48:54.050441 through 13:48:54.665199; duplicates elided) ...
00:43:46.893 [2024-11-07 13:48:54.679711]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.679731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.694062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.694080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.709606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.709625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.722909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.722928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.737334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.737352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.751587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.751606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.765782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.765801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.778447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.778465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.792969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.792988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.807812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.807830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.822224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.822242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.837913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.837932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.853641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.853660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.867208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.867226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.893 [2024-11-07 13:48:54.881901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:46.893 [2024-11-07 13:48:54.881919] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:54.897492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:54.897511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:54.911326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:54.911344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:54.925824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:54.925846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:54.939481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:54.939500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:54.954349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:54.954368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:54.969827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:54.969845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:54.983075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:54.983093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 16988.25 IOPS, 132.72 MiB/s [2024-11-07T12:48:55.161Z] [2024-11-07 13:48:54.998441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:54.998459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:55.014155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:55.014173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:55.029672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:55.029691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:55.043553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:55.043571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:55.058348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:55.058366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:55.072674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:55.072694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:55.086990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:55.087009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 
13:48:55.101350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:55.101370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:55.115123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:55.115142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:55.129695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:55.129714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:55.142359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:55.142377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.154 [2024-11-07 13:48:55.157154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.154 [2024-11-07 13:48:55.157172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.171531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.171549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.185935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.185953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.201335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.201358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.215581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.215599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.230395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.230413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.245824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.245842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.259287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.259305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.273879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.273897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.288932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.288950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.303528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.303546] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.318649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.318667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.334092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.334109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.349549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.349567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.363052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.363070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.378109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.378127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.393787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.393805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.415 [2024-11-07 13:48:55.406039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.415 [2024-11-07 13:48:55.406057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.675 [2024-11-07 13:48:55.420642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.675 [2024-11-07 13:48:55.420661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.675 [2024-11-07 13:48:55.434973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.675 [2024-11-07 13:48:55.434992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.675 [2024-11-07 13:48:55.449283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.675 [2024-11-07 13:48:55.449301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.675 [2024-11-07 13:48:55.462991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.675 [2024-11-07 13:48:55.463009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.477544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.477567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.491590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.491609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.505980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.505998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.521735] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.521753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.534243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.534261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.549072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.549090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.563401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.563419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.578206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.578224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.592937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.592955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.607517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.607535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.621881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.621899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.637641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.637659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.651714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.651734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.676 [2024-11-07 13:48:55.666093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.676 [2024-11-07 13:48:55.666111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.936 [2024-11-07 13:48:55.681426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.936 [2024-11-07 13:48:55.681445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.936 [2024-11-07 13:48:55.695909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.936 [2024-11-07 13:48:55.695927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.936 [2024-11-07 13:48:55.710515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.936 [2024-11-07 13:48:55.710533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.936 [2024-11-07 13:48:55.725340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.936 [2024-11-07 13:48:55.725358] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.936 [2024-11-07 13:48:55.739328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.936 [2024-11-07 13:48:55.739346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.936 [2024-11-07 13:48:55.754256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.936 [2024-11-07 13:48:55.754274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.936 [2024-11-07 13:48:55.769564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.936 [2024-11-07 13:48:55.769583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.936 [2024-11-07 13:48:55.783544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.936 [2024-11-07 13:48:55.783563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.936 [2024-11-07 13:48:55.797728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.937 [2024-11-07 13:48:55.797747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.937 [2024-11-07 13:48:55.811435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.937 [2024-11-07 13:48:55.811455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.937 [2024-11-07 13:48:55.825664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.937 [2024-11-07 13:48:55.825683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.937 [2024-11-07 13:48:55.839446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.937 [2024-11-07 13:48:55.839464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.937 [2024-11-07 13:48:55.854178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.937 [2024-11-07 13:48:55.854197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.937 [2024-11-07 13:48:55.870324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.937 [2024-11-07 13:48:55.870343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.937 [2024-11-07 13:48:55.885986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.937 [2024-11-07 13:48:55.886005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.937 [2024-11-07 13:48:55.901319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.937 [2024-11-07 13:48:55.901337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.937 [2024-11-07 13:48:55.915225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.937 [2024-11-07 13:48:55.915243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:47.937 [2024-11-07 13:48:55.929744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:47.937 [2024-11-07 13:48:55.929762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:48.197 [2024-11-07 13:48:55.942108] 
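Each pair above is one iteration of the test's namespace hot-add loop colliding with an NSID that is still attached. A minimal way to reproduce the same two errors by hand, assuming a running target, the cnode1 subsystem from this run, and the standard scripts/rpc.py client (illustrative only, not part of this log):

  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds, NSID 1 is now in use
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add fails: Requested NSID 1 already in use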
00:43:48.197 16999.60 IOPS, 132.81 MiB/s
00:43:48.197 Latency(us)
00:43:48.197 [2024-11-07T12:48:56.204Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:43:48.197 [2024-11-07T12:48:56.204Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:43:48.197 [2024-11-07T12:48:56.204Z] Nvme1n1            :       5.00   17008.94     132.88       0.00       0.00    7519.54    2826.24   14308.69
00:43:48.197 [2024-11-07T12:48:56.204Z] ===================================================================================================================
00:43:48.197 [2024-11-07T12:48:56.204Z] Total              :            17008.94     132.88       0.00       0.00    7519.54    2826.24   14308.69
00:43:48.198 [2024-11-07 13:48:55.997602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:48.198 [2024-11-07 13:48:55.997619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
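As a quick cross-check of the summary above: the MiB/s column is just IOPS times the 8192-byte I/O size, 17008.94 x 8192 B ≈ 139.34 MB/s, and 139337236 / 2^20 ≈ 132.88 MiB/s as reported. Likewise, with queue depth 128 Little's law predicts an average latency of 128 / 17008.94 ≈ 7.53 ms, consistent with the 7519.54 us Average column.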
00:43:48.198 [... after the run completes, the error pair keeps repeating at ~12 ms intervals, 13:48:56.009 through 13:48:56.597, until the add-ns loop is torn down ...]
00:43:48.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (30127) - No such process
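The script next swaps NSID 1 onto a delay bdev, presumably so that I/O lingers in flight long enough for the abort stage that follows. The same sequence can be issued by hand with scripts/rpc.py (a sketch assuming a running target that already exposes the malloc0 bdev; the delay arguments are latencies in microseconds):

  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1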
00:43:48.720 13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 30127
13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:48.720 13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:48.720 delay0
13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:48.720 13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:43:48.980 [2024-11-07 13:48:56.752408] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:43:55.562 Initializing NVMe Controllers
00:43:55.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:43:55.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:43:55.562 Initialization complete. Launching workers.
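The abort example reaches the target through the transport ID string shown above; the equivalent host-side attach with stock nvme-cli would look something like this (illustrative only, not executed in this run):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

In the per-run summary that follows, the counters are self-consistent: 320 completed + 5910 failed I/Os = 6230 commands issued to NSID 1, matched by 6183 submitted + 47 failed-to-submit abort attempts; of the 6183 submitted aborts, 6045 succeeded and 138 did not.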
00:43:55.562 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5910
00:43:55.562 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6183, failed to submit 47
00:43:55.562 success 6045, unsuccessful 138, failed 0
00:43:55.562 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:43:55.562 rmmod nvme_tcp
00:43:55.562 rmmod nvme_fabrics
00:43:55.562 rmmod nvme_keyring
00:43:55.562 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 27880 ']'
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 27880
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 27880 ']'
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 27880
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 27880
00:43:55.823 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 27880'
killing process with pid 27880
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 27880
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 27880
00:43:56.396 13:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
13:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
13:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
13:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
13:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
13:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
13:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
13:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
13:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
13:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
13:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
13:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:43:58.311 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:43:58.311
00:43:58.311 real 0m36.842s
00:43:58.311 user 0m47.939s
00:43:58.311 sys 0m12.691s
00:43:58.311 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable
13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:58.311 ************************************
00:43:58.311 END TEST nvmf_zcopy
00:43:58.311 ************************************
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:43:58.573 ************************************
00:43:58.573 START TEST nvmf_nmic
00:43:58.573 ************************************
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:43:58.573 * Looking for test storage...
00:43:58.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:43:58.573 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:43:58.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:43:58.574 --rc genhtml_branch_coverage=1
00:43:58.574 --rc genhtml_function_coverage=1
00:43:58.574 --rc genhtml_legend=1
00:43:58.574 --rc geninfo_all_blocks=1
00:43:58.574 --rc geninfo_unexecuted_blocks=1
00:43:58.574
00:43:58.574 '
00:43:58.574 [... the same multi-line option block is then assigned to LCOV_OPTS, and the equivalent block is exported and assigned as LCOV='lcov ...' ...]
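The lt/cmp_versions walk traced above splits each version string on '.', '-' and ':' and compares field by field until one side wins. A condensed Bash sketch of that logic (simplified from scripts/common.sh, not a verbatim copy):

  lt() {   # succeeds when version $1 is older than version $2
      local IFS=.-: i v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo old   # prints "old" -- the branch this run takes for lcov 1.15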
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:58.574 13:49:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:58.574 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:58.575 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:58.575 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:43:58.575 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:06.716 13:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:06.716 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:06.716 13:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:06.716 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:06.716 Found net devices under 0000:31:00.0: cvl_0_0 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:06.716 
13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:06.716 Found net devices under 0000:31:00.1: cvl_0_1 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
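The records around this point show nvmf_tcp_init building the two-port TCP topology for the run: the first e810 port (cvl_0_0) is moved into a dedicated network namespace and becomes the target side at 10.0.0.2/24, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24; the iptables rule and the two ping checks that follow complete the bring-up. A minimal sketch of the same sequence, with this run's interface names hard-coded:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                          # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move target port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns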
00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:06.716 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:06.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:06.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:44:06.716 00:44:06.716 --- 10.0.0.2 ping statistics --- 00:44:06.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:06.717 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:06.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:06.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:44:06.717 00:44:06.717 --- 10.0.0.1 ping statistics --- 00:44:06.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:06.717 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:06.717 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:06.977 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=37408 00:44:06.977 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 37408 00:44:06.977 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:44:06.977 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 37408 ']' 00:44:06.977 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:06.977 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:06.977 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:06.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:06.977 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:06.977 13:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:06.977 [2024-11-07 13:49:14.808229] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:06.977 [2024-11-07 13:49:14.810564] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:44:06.977 [2024-11-07 13:49:14.810647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:06.977 [2024-11-07 13:49:14.955903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:07.238 [2024-11-07 13:49:15.055672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:07.238 [2024-11-07 13:49:15.055714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:07.238 [2024-11-07 13:49:15.055727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:07.238 [2024-11-07 13:49:15.055737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:07.238 [2024-11-07 13:49:15.055748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:07.238 [2024-11-07 13:49:15.057949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:07.238 [2024-11-07 13:49:15.058112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:07.238 [2024-11-07 13:49:15.057957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:07.238 [2024-11-07 13:49:15.058137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:07.499 [2024-11-07 13:49:15.296017] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:07.499 [2024-11-07 13:49:15.303078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:07.499 [2024-11-07 13:49:15.303840] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
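nvmfappstart has just launched the target inside that namespace with interrupt mode enabled: the 0xF mask starts one reactor per core 0-3, and every spdk_thread is switched to interrupt-driven wakeups instead of busy polling. Roughly what the harness does here, with the waitforlisten step sketched as a plain JSON-RPC probe (an assumption standing in for the real helper, which also enforces a retry limit):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # poll until the app answers on its default RPC socket; rpc_get_methods
  # is used here only as a cheap liveness probe
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done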
00:44:07.499 [2024-11-07 13:49:15.303993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:07.499 [2024-11-07 13:49:15.304136] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:07.760 [2024-11-07 13:49:15.607236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:07.760 Malloc0 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:07.760 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
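With the listener on 10.0.0.2:4420 up, nmic.sh provisions its subsystem over JSON-RPC and then runs the two cases seen below: case1 expects nvmf_subsystem_add_ns to fail when a second subsystem claims the same bdev (Malloc0 is already held exclusive_write by cnode1), and case2 adds a second listener on 4421 and connects through both paths. Condensed from the rpc_cmd calls in these records:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # case1: a bdev can back only one subsystem namespace at a time
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 && exit 1  # must fail
  # case2: second listener, then one connect per path
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 "${NVME_HOST[@]}"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 "${NVME_HOST[@]}"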
00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:07.761 [2024-11-07 13:49:15.715117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:44:07.761 test case1: single bdev can't be used in multiple subsystems 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:07.761 [2024-11-07 13:49:15.750793] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:44:07.761 [2024-11-07 13:49:15.750829] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:44:07.761 [2024-11-07 13:49:15.750842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.761 request: 00:44:07.761 { 00:44:07.761 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:44:07.761 "namespace": { 00:44:07.761 "bdev_name": "Malloc0", 00:44:07.761 "no_auto_visible": false 00:44:07.761 }, 00:44:07.761 "method": "nvmf_subsystem_add_ns", 00:44:07.761 "req_id": 1 00:44:07.761 } 00:44:07.761 Got JSON-RPC error response 00:44:07.761 response: 00:44:07.761 { 00:44:07.761 "code": -32602, 00:44:07.761 "message": "Invalid parameters" 00:44:07.761 } 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:44:07.761 13:49:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:44:07.761 Adding namespace failed - expected result. 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:44:07.761 test case2: host connect to nvmf target in multiple paths 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:07.761 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:07.761 [2024-11-07 13:49:15.762933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:44:08.022 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.022 13:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:08.282 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:44:08.853 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:44:08.853 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:44:08.853 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:44:08.853 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:44:08.853 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:44:10.768 13:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:44:10.768 13:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:44:10.768 13:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:44:10.768 13:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:44:10.768 13:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:44:10.768 13:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:44:10.768 13:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:44:10.768 [global] 00:44:10.768 thread=1 00:44:10.768 invalidate=1 
00:44:10.768 rw=write 00:44:10.768 time_based=1 00:44:10.768 runtime=1 00:44:10.768 ioengine=libaio 00:44:10.768 direct=1 00:44:10.768 bs=4096 00:44:10.768 iodepth=1 00:44:10.768 norandommap=0 00:44:10.768 numjobs=1 00:44:10.768 00:44:10.768 verify_dump=1 00:44:10.768 verify_backlog=512 00:44:10.768 verify_state_save=0 00:44:10.768 do_verify=1 00:44:10.768 verify=crc32c-intel 00:44:10.768 [job0] 00:44:10.768 filename=/dev/nvme0n1 00:44:10.768 Could not set queue depth (nvme0n1) 00:44:11.029 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:11.029 fio-3.35 00:44:11.029 Starting 1 thread 00:44:12.415 00:44:12.415 job0: (groupid=0, jobs=1): err= 0: pid=38270: Thu Nov 7 13:49:20 2024 00:44:12.415 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:44:12.415 slat (nsec): min=7065, max=61374, avg=26505.09, stdev=3401.11 00:44:12.415 clat (usec): min=422, max=1943, avg=1124.54, stdev=129.48 00:44:12.415 lat (usec): min=449, max=1969, avg=1151.04, stdev=129.50 00:44:12.415 clat percentiles (usec): 00:44:12.415 | 1.00th=[ 523], 5.00th=[ 889], 10.00th=[ 1012], 20.00th=[ 1090], 00:44:12.415 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172], 00:44:12.415 | 70.00th=[ 1188], 80.00th=[ 1205], 90.00th=[ 1221], 95.00th=[ 1237], 00:44:12.415 | 99.00th=[ 1287], 99.50th=[ 1319], 99.90th=[ 1942], 99.95th=[ 1942], 00:44:12.415 | 99.99th=[ 1942] 00:44:12.415 write: IOPS=628, BW=2513KiB/s (2574kB/s)(2516KiB/1001msec); 0 zone resets 00:44:12.415 slat (usec): min=9, max=31274, avg=78.65, stdev=1245.90 00:44:12.415 clat (usec): min=202, max=858, avg=559.77, stdev=123.89 00:44:12.415 lat (usec): min=230, max=31922, avg=638.42, stdev=1255.88 00:44:12.415 clat percentiles (usec): 00:44:12.415 | 1.00th=[ 231], 5.00th=[ 330], 10.00th=[ 392], 20.00th=[ 449], 00:44:12.415 | 30.00th=[ 502], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 611], 00:44:12.415 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 725], 00:44:12.415 | 99.00th=[ 766], 99.50th=[ 766], 99.90th=[ 857], 99.95th=[ 857], 00:44:12.415 | 99.99th=[ 857] 00:44:12.415 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:44:12.415 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:12.415 lat (usec) : 250=0.79%, 500=15.43%, 750=38.83%, 1000=4.21% 00:44:12.415 lat (msec) : 2=40.75% 00:44:12.415 cpu : usr=2.30%, sys=2.70%, ctx=1144, majf=0, minf=1 00:44:12.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:12.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:12.415 issued rwts: total=512,629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:12.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:12.415 00:44:12.415 Run status group 0 (all jobs): 00:44:12.415 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:44:12.415 WRITE: bw=2513KiB/s (2574kB/s), 2513KiB/s-2513KiB/s (2574kB/s-2574kB/s), io=2516KiB (2576kB), run=1001-1001msec 00:44:12.415 00:44:12.415 Disk stats (read/write): 00:44:12.415 nvme0n1: ios=522/512, merge=0/0, ticks=991/289, in_queue=1280, util=99.00% 00:44:12.415 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:12.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:44:12.676 13:49:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:12.676 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:44:12.676 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:44:12.676 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:12.676 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:44:12.676 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:12.938 rmmod nvme_tcp 00:44:12.938 rmmod nvme_fabrics 00:44:12.938 rmmod nvme_keyring 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 37408 ']' 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 37408 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 37408 ']' 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 37408 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 37408 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 37408' 00:44:12.938 killing process with pid 37408 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 37408 00:44:12.938 13:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 37408 00:44:13.879 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:13.879 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:13.879 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:13.879 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:44:13.879 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:44:13.879 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:44:13.879 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:13.879 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:13.879 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:13.879 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:13.879 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:13.879 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:16.422 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:16.422 00:44:16.422 real 0m17.482s 00:44:16.422 user 0m37.511s 00:44:16.422 sys 0m8.198s 00:44:16.422 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:16.422 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:16.422 ************************************ 00:44:16.422 END TEST nvmf_nmic 00:44:16.422 ************************************ 00:44:16.422 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:44:16.422 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:44:16.422 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:16.422 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:16.422 ************************************ 00:44:16.422 START TEST nvmf_fio_target 00:44:16.422 ************************************ 00:44:16.422 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:44:16.422 * Looking for test storage... 
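Before nvmf_fio_target's preamble continues below, note that the teardown which closed out nvmf_nmic above reduces to a short sequence; the pid is this run's, and the netns removal is an assumed stand-in for the harness's _remove_spdk_ns helper:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # drops both paths
  sync
  modprobe -v -r nvme-tcp                                  # also unloads nvme_fabrics/nvme_keyring
  kill 37408 && wait 37408                                 # stop the interrupt-mode target
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # strip the test's 4420 rule
  ip netns delete cvl_0_0_ns_spdk                          # _remove_spdk_ns equivalent (assumed form)
  ip -4 addr flush cvl_0_1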
00:44:16.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:16.422 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:16.422 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:44:16.422 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:16.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.422 --rc genhtml_branch_coverage=1 00:44:16.422 --rc genhtml_function_coverage=1 00:44:16.422 --rc genhtml_legend=1 00:44:16.422 --rc geninfo_all_blocks=1 00:44:16.422 --rc geninfo_unexecuted_blocks=1 00:44:16.422 00:44:16.422 ' 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:16.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.422 --rc genhtml_branch_coverage=1 00:44:16.422 --rc genhtml_function_coverage=1 00:44:16.422 --rc genhtml_legend=1 00:44:16.422 --rc geninfo_all_blocks=1 00:44:16.422 --rc geninfo_unexecuted_blocks=1 00:44:16.422 00:44:16.422 ' 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:16.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.422 --rc genhtml_branch_coverage=1 00:44:16.422 --rc genhtml_function_coverage=1 00:44:16.422 --rc genhtml_legend=1 00:44:16.422 --rc geninfo_all_blocks=1 00:44:16.422 --rc geninfo_unexecuted_blocks=1 00:44:16.422 00:44:16.422 ' 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:16.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.422 --rc genhtml_branch_coverage=1 00:44:16.422 --rc genhtml_function_coverage=1 00:44:16.422 --rc genhtml_legend=1 00:44:16.422 --rc geninfo_all_blocks=1 00:44:16.422 --rc geninfo_unexecuted_blocks=1 00:44:16.422 
00:44:16.422 ' 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:16.422 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:44:16.423 13:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:24.687 13:49:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:24.687 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:24.688 13:49:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:24.688 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:24.688 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:24.688 Found net 
devices under 0000:31:00.0: cvl_0_0 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:24.688 Found net devices under 0000:31:00.1: cvl_0_1 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:24.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:24.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:44:24.688 00:44:24.688 --- 10.0.0.2 ping statistics --- 00:44:24.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:24.688 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:24.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:24.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:44:24.688 00:44:24.688 --- 10.0.0.1 ping statistics --- 00:44:24.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:24.688 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:24.688 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=43386 00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 43386 00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 43386 ']' 00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:24.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:24.689 13:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:24.689 [2024-11-07 13:49:32.542488] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:24.689 [2024-11-07 13:49:32.545148] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:44:24.689 [2024-11-07 13:49:32.545244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:24.949 [2024-11-07 13:49:32.713650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:24.949 [2024-11-07 13:49:32.815432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:24.949 [2024-11-07 13:49:32.815476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:24.949 [2024-11-07 13:49:32.815490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:24.949 [2024-11-07 13:49:32.815500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:24.949 [2024-11-07 13:49:32.815511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:24.949 [2024-11-07 13:49:32.817660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:24.949 [2024-11-07 13:49:32.817750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:24.949 [2024-11-07 13:49:32.817871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:24.949 [2024-11-07 13:49:32.817915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:25.210 [2024-11-07 13:49:33.055142] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:25.210 [2024-11-07 13:49:33.064054] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:25.210 [2024-11-07 13:49:33.064799] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:25.210 [2024-11-07 13:49:33.064960] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:25.210 [2024-11-07 13:49:33.065099] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
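With the target up in interrupt mode (four reactors on cores 0-3, each poll-group thread already in intr mode per the notices above), fio.sh configures it over RPC. Stripped of the xtrace prefixes, the next stretch of log reduces to this rpc.py sequence — rpc.py here standing in for the full scripts/rpc.py path seen in the trace:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512      # Malloc0, plain namespace
  rpc.py bdev_malloc_create 64 512      # Malloc1, plain namespace
  rpc.py bdev_malloc_create 64 512      # Malloc2 \ members of raid0
  rpc.py bdev_malloc_create 64 512      # Malloc3 /
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_malloc_create 64 512      # Malloc4 \
  rpc.py bdev_malloc_create 64 512      # Malloc5  > members of concat0
  rpc.py bdev_malloc_create 64 512      # Malloc6 /
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

One subsystem, four namespaces (two plain mallocs, a RAID0, a concat), so after connect the initiator sees nvme0n1..nvme0n4; waitforserial then greps lsblk -o NAME,SERIAL for SPDKISFASTANDAWESOME until all four block devices show up.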
00:44:25.470 13:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:25.470 13:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:44:25.470 13:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:25.470 13:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:25.470 13:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:25.470 13:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:25.470 13:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:44:25.731 [2024-11-07 13:49:33.499078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:25.731 13:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:25.991 13:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:44:25.991 13:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:26.251 13:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:44:26.251 13:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:26.251 13:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:44:26.251 13:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:26.512 13:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:44:26.512 13:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:44:26.772 13:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:27.033 13:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:44:27.033 13:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:27.293 13:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:44:27.293 13:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:27.553 13:49:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:44:27.553 13:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:44:27.553 13:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:27.813 13:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:44:27.813 13:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:28.074 13:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:44:28.074 13:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:44:28.074 13:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:28.335 [2024-11-07 13:49:36.162850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:28.335 13:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:44:28.595 13:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:44:28.595 13:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:29.166 13:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:44:29.166 13:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:44:29.166 13:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:44:29.166 13:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:44:29.166 13:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:44:29.166 13:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:44:31.077 13:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:44:31.077 13:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:44:31.077 13:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:44:31.077 13:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:44:31.077 13:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:44:31.077 13:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:44:31.077 13:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:44:31.077 [global] 00:44:31.077 thread=1 00:44:31.077 invalidate=1 00:44:31.077 rw=write 00:44:31.077 time_based=1 00:44:31.077 runtime=1 00:44:31.077 ioengine=libaio 00:44:31.077 direct=1 00:44:31.077 bs=4096 00:44:31.077 iodepth=1 00:44:31.077 norandommap=0 00:44:31.077 numjobs=1 00:44:31.077 00:44:31.077 verify_dump=1 00:44:31.077 verify_backlog=512 00:44:31.077 verify_state_save=0 00:44:31.077 do_verify=1 00:44:31.077 verify=crc32c-intel 00:44:31.077 [job0] 00:44:31.077 filename=/dev/nvme0n1 00:44:31.077 [job1] 00:44:31.077 filename=/dev/nvme0n2 00:44:31.077 [job2] 00:44:31.077 filename=/dev/nvme0n3 00:44:31.077 [job3] 00:44:31.077 filename=/dev/nvme0n4 00:44:31.359 Could not set queue depth (nvme0n1) 00:44:31.359 Could not set queue depth (nvme0n2) 00:44:31.359 Could not set queue depth (nvme0n3) 00:44:31.359 Could not set queue depth (nvme0n4) 00:44:31.620 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:31.620 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:31.620 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:31.620 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:31.620 fio-3.35 00:44:31.620 Starting 4 threads 00:44:33.037 00:44:33.037 job0: (groupid=0, jobs=1): err= 0: pid=44858: Thu Nov 7 13:49:40 2024 00:44:33.037 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:44:33.037 slat (nsec): min=7015, max=65759, avg=26854.43, stdev=7555.24 00:44:33.037 clat (usec): min=366, max=1245, avg=785.14, stdev=125.44 00:44:33.037 lat (usec): min=395, max=1274, avg=812.00, stdev=127.83 00:44:33.037 clat percentiles (usec): 00:44:33.037 | 1.00th=[ 461], 5.00th=[ 570], 10.00th=[ 627], 20.00th=[ 693], 00:44:33.037 | 30.00th=[ 725], 40.00th=[ 758], 50.00th=[ 791], 60.00th=[ 816], 00:44:33.037 | 70.00th=[ 848], 80.00th=[ 889], 90.00th=[ 947], 95.00th=[ 979], 00:44:33.037 | 99.00th=[ 1045], 99.50th=[ 1090], 99.90th=[ 1254], 99.95th=[ 1254], 00:44:33.037 | 99.99th=[ 1254] 00:44:33.037 write: IOPS=941, BW=3764KiB/s (3855kB/s)(3768KiB/1001msec); 0 zone resets 00:44:33.037 slat (usec): min=9, max=42580, avg=116.91, stdev=1882.56 00:44:33.037 clat (usec): min=164, max=1839, avg=492.40, stdev=134.49 00:44:33.037 lat (usec): min=175, max=43323, avg=609.31, stdev=1897.71 00:44:33.037 clat percentiles (usec): 00:44:33.037 | 1.00th=[ 231], 5.00th=[ 297], 10.00th=[ 322], 20.00th=[ 371], 00:44:33.037 | 30.00th=[ 420], 40.00th=[ 453], 50.00th=[ 486], 60.00th=[ 519], 00:44:33.037 | 70.00th=[ 562], 80.00th=[ 611], 90.00th=[ 668], 95.00th=[ 709], 00:44:33.037 | 99.00th=[ 775], 
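The job file just echoed is a one-second verifying write pass: four libaio jobs, one per namespace, sequential 4 KiB writes at queue depth 1 with O_DIRECT, and crc32c-intel data verification interleaved via verify_backlog=512. For a single device the same workload could be expressed directly on the fio command line — a hypothetical equivalent for reference, not something the wrapper actually runs:

  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --ioengine=libaio --direct=1 --invalidate=1 --norandommap \
      --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 \
      --verify_dump=1 --verify_state_save=0

The later fio-wrapper invocations in this log vary only the -t/-d arguments: randwrite at depth 1, then write at depth 128, against the same four kernel block devices.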
99.50th=[ 799], 99.90th=[ 1844], 99.95th=[ 1844], 00:44:33.037 | 99.99th=[ 1844] 00:44:33.037 bw ( KiB/s): min= 4104, max= 4104, per=38.93%, avg=4104.00, stdev= 0.00, samples=1 00:44:33.037 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:44:33.037 lat (usec) : 250=1.10%, 500=35.28%, 750=40.23%, 1000=21.94% 00:44:33.037 lat (msec) : 2=1.44% 00:44:33.037 cpu : usr=3.60%, sys=4.50%, ctx=1457, majf=0, minf=1 00:44:33.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:33.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:33.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:33.037 issued rwts: total=512,942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:33.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:33.037 job1: (groupid=0, jobs=1): err= 0: pid=44859: Thu Nov 7 13:49:40 2024 00:44:33.037 read: IOPS=19, BW=79.4KiB/s (81.3kB/s)(80.0KiB/1007msec) 00:44:33.037 slat (nsec): min=27025, max=46504, avg=28495.15, stdev=4244.18 00:44:33.037 clat (usec): min=40874, max=42302, avg=41778.11, stdev=433.05 00:44:33.037 lat (usec): min=40901, max=42349, avg=41806.61, stdev=434.21 00:44:33.037 clat percentiles (usec): 00:44:33.037 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:44:33.037 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:44:33.037 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:33.037 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:33.037 | 99.99th=[42206] 00:44:33.037 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:44:33.037 slat (nsec): min=9681, max=53711, avg=20523.01, stdev=12333.25 00:44:33.037 clat (usec): min=111, max=549, avg=307.44, stdev=88.13 00:44:33.037 lat (usec): min=123, max=584, avg=327.96, stdev=95.30 00:44:33.037 clat percentiles (usec): 00:44:33.037 | 1.00th=[ 119], 5.00th=[ 131], 10.00th=[ 172], 20.00th=[ 255], 00:44:33.037 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 322], 00:44:33.037 | 70.00th=[ 367], 80.00th=[ 388], 90.00th=[ 420], 95.00th=[ 433], 00:44:33.037 | 99.00th=[ 486], 99.50th=[ 510], 99.90th=[ 553], 99.95th=[ 553], 00:44:33.037 | 99.99th=[ 553] 00:44:33.037 bw ( KiB/s): min= 4096, max= 4096, per=38.85%, avg=4096.00, stdev= 0.00, samples=1 00:44:33.037 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:33.037 lat (usec) : 250=16.92%, 500=78.57%, 750=0.75% 00:44:33.037 lat (msec) : 50=3.76% 00:44:33.037 cpu : usr=0.40%, sys=1.09%, ctx=536, majf=0, minf=1 00:44:33.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:33.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:33.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:33.037 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:33.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:33.037 job2: (groupid=0, jobs=1): err= 0: pid=44866: Thu Nov 7 13:49:40 2024 00:44:33.037 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:44:33.037 slat (nsec): min=7652, max=46552, avg=27124.31, stdev=3610.74 00:44:33.037 clat (usec): min=334, max=1435, avg=1081.46, stdev=177.10 00:44:33.037 lat (usec): min=348, max=1448, avg=1108.59, stdev=177.61 00:44:33.037 clat percentiles (usec): 00:44:33.037 | 1.00th=[ 383], 5.00th=[ 693], 10.00th=[ 881], 20.00th=[ 996], 00:44:33.037 | 30.00th=[ 1037], 40.00th=[ 
1074], 50.00th=[ 1123], 60.00th=[ 1139], 00:44:33.037 | 70.00th=[ 1172], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1287], 00:44:33.037 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1434], 99.95th=[ 1434], 00:44:33.037 | 99.99th=[ 1434] 00:44:33.037 write: IOPS=687, BW=2749KiB/s (2815kB/s)(2752KiB/1001msec); 0 zone resets 00:44:33.037 slat (nsec): min=10059, max=76477, avg=31315.38, stdev=10850.38 00:44:33.037 clat (usec): min=257, max=940, avg=584.32, stdev=117.83 00:44:33.037 lat (usec): min=269, max=976, avg=615.64, stdev=122.20 00:44:33.037 clat percentiles (usec): 00:44:33.037 | 1.00th=[ 285], 5.00th=[ 375], 10.00th=[ 429], 20.00th=[ 482], 00:44:33.037 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 611], 00:44:33.037 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 766], 00:44:33.037 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 938], 99.95th=[ 938], 00:44:33.037 | 99.99th=[ 938] 00:44:33.037 bw ( KiB/s): min= 1400, max= 4104, per=26.10%, avg=2752.00, stdev=1912.02, samples=2 00:44:33.037 iops : min= 350, max= 1026, avg=688.00, stdev=478.00, samples=2 00:44:33.037 lat (usec) : 500=14.50%, 750=41.50%, 1000=10.08% 00:44:33.037 lat (msec) : 2=33.92% 00:44:33.037 cpu : usr=1.50%, sys=3.80%, ctx=1202, majf=0, minf=1 00:44:33.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:33.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:33.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:33.037 issued rwts: total=512,688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:33.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:33.037 job3: (groupid=0, jobs=1): err= 0: pid=44867: Thu Nov 7 13:49:40 2024 00:44:33.037 read: IOPS=401, BW=1606KiB/s (1645kB/s)(1608KiB/1001msec) 00:44:33.037 slat (nsec): min=8666, max=47211, avg=24939.95, stdev=5743.39 00:44:33.037 clat (usec): min=803, max=41963, avg=1858.78, stdev=5607.02 00:44:33.038 lat (usec): min=821, max=41989, avg=1883.72, stdev=5607.18 00:44:33.038 clat percentiles (usec): 00:44:33.038 | 1.00th=[ 824], 5.00th=[ 906], 10.00th=[ 938], 20.00th=[ 996], 00:44:33.038 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1106], 00:44:33.038 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:44:33.038 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:44:33.038 | 99.99th=[42206] 00:44:33.038 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:44:33.038 slat (nsec): min=9447, max=71350, avg=30064.25, stdev=9309.32 00:44:33.038 clat (usec): min=194, max=1078, avg=429.51, stdev=107.46 00:44:33.038 lat (usec): min=226, max=1111, avg=459.57, stdev=110.12 00:44:33.038 clat percentiles (usec): 00:44:33.038 | 1.00th=[ 227], 5.00th=[ 265], 10.00th=[ 306], 20.00th=[ 334], 00:44:33.038 | 30.00th=[ 363], 40.00th=[ 392], 50.00th=[ 429], 60.00th=[ 453], 00:44:33.038 | 70.00th=[ 482], 80.00th=[ 519], 90.00th=[ 570], 95.00th=[ 603], 00:44:33.038 | 99.00th=[ 660], 99.50th=[ 709], 99.90th=[ 1074], 99.95th=[ 1074], 00:44:33.038 | 99.99th=[ 1074] 00:44:33.038 bw ( KiB/s): min= 4104, max= 4104, per=38.93%, avg=4104.00, stdev= 0.00, samples=1 00:44:33.038 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:44:33.038 lat (usec) : 250=2.30%, 500=39.50%, 750=14.00%, 1000=9.85% 00:44:33.038 lat (msec) : 2=33.48%, 50=0.88% 00:44:33.038 cpu : usr=1.50%, sys=2.40%, ctx=915, majf=0, minf=1 00:44:33.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:44:33.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:33.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:33.038 issued rwts: total=402,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:33.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:33.038 00:44:33.038 Run status group 0 (all jobs): 00:44:33.038 READ: bw=5744KiB/s (5882kB/s), 79.4KiB/s-2046KiB/s (81.3kB/s-2095kB/s), io=5784KiB (5923kB), run=1001-1007msec 00:44:33.038 WRITE: bw=10.3MiB/s (10.8MB/s), 2034KiB/s-3764KiB/s (2083kB/s-3855kB/s), io=10.4MiB (10.9MB), run=1001-1007msec 00:44:33.038 00:44:33.038 Disk stats (read/write): 00:44:33.038 nvme0n1: ios=561/613, merge=0/0, ticks=732/250, in_queue=982, util=83.87% 00:44:33.038 nvme0n2: ios=64/512, merge=0/0, ticks=779/151, in_queue=930, util=87.76% 00:44:33.038 nvme0n3: ios=486/512, merge=0/0, ticks=1384/279, in_queue=1663, util=91.85% 00:44:33.038 nvme0n4: ios=309/512, merge=0/0, ticks=705/212, in_queue=917, util=97.64% 00:44:33.038 13:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:44:33.038 [global] 00:44:33.038 thread=1 00:44:33.038 invalidate=1 00:44:33.038 rw=randwrite 00:44:33.038 time_based=1 00:44:33.038 runtime=1 00:44:33.038 ioengine=libaio 00:44:33.038 direct=1 00:44:33.038 bs=4096 00:44:33.038 iodepth=1 00:44:33.038 norandommap=0 00:44:33.038 numjobs=1 00:44:33.038 00:44:33.038 verify_dump=1 00:44:33.038 verify_backlog=512 00:44:33.038 verify_state_save=0 00:44:33.038 do_verify=1 00:44:33.038 verify=crc32c-intel 00:44:33.038 [job0] 00:44:33.038 filename=/dev/nvme0n1 00:44:33.038 [job1] 00:44:33.038 filename=/dev/nvme0n2 00:44:33.038 [job2] 00:44:33.038 filename=/dev/nvme0n3 00:44:33.038 [job3] 00:44:33.038 filename=/dev/nvme0n4 00:44:33.038 Could not set queue depth (nvme0n1) 00:44:33.038 Could not set queue depth (nvme0n2) 00:44:33.038 Could not set queue depth (nvme0n3) 00:44:33.038 Could not set queue depth (nvme0n4) 00:44:33.298 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:33.298 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:33.298 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:33.298 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:33.298 fio-3.35 00:44:33.298 Starting 4 threads 00:44:34.707 00:44:34.707 job0: (groupid=0, jobs=1): err= 0: pid=45355: Thu Nov 7 13:49:42 2024 00:44:34.707 read: IOPS=17, BW=71.6KiB/s (73.4kB/s)(72.0KiB/1005msec) 00:44:34.707 slat (nsec): min=25901, max=26479, avg=26155.94, stdev=165.07 00:44:34.707 clat (usec): min=885, max=42013, avg=39463.00, stdev=9637.13 00:44:34.707 lat (usec): min=911, max=42039, avg=39489.15, stdev=9637.13 00:44:34.707 clat percentiles (usec): 00:44:34.707 | 1.00th=[ 889], 5.00th=[ 889], 10.00th=[41157], 20.00th=[41157], 00:44:34.707 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:44:34.707 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:34.707 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:34.707 | 99.99th=[42206] 00:44:34.707 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:44:34.707 slat (nsec): 
min=9531, max=90161, avg=31651.08, stdev=8950.44 00:44:34.707 clat (usec): min=146, max=878, avg=531.90, stdev=118.42 00:44:34.707 lat (usec): min=155, max=911, avg=563.55, stdev=121.19 00:44:34.707 clat percentiles (usec): 00:44:34.707 | 1.00th=[ 269], 5.00th=[ 314], 10.00th=[ 383], 20.00th=[ 416], 00:44:34.707 | 30.00th=[ 486], 40.00th=[ 506], 50.00th=[ 529], 60.00th=[ 570], 00:44:34.707 | 70.00th=[ 611], 80.00th=[ 635], 90.00th=[ 676], 95.00th=[ 709], 00:44:34.707 | 99.00th=[ 783], 99.50th=[ 840], 99.90th=[ 881], 99.95th=[ 881], 00:44:34.707 | 99.99th=[ 881] 00:44:34.707 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:44:34.707 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:34.707 lat (usec) : 250=0.75%, 500=34.34%, 750=59.43%, 1000=2.26% 00:44:34.707 lat (msec) : 50=3.21% 00:44:34.707 cpu : usr=0.90%, sys=1.49%, ctx=533, majf=0, minf=1 00:44:34.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:34.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.707 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:34.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:34.707 job1: (groupid=0, jobs=1): err= 0: pid=45356: Thu Nov 7 13:49:42 2024 00:44:34.707 read: IOPS=16, BW=67.9KiB/s (69.5kB/s)(68.0KiB/1002msec) 00:44:34.707 slat (nsec): min=25981, max=27159, avg=26395.35, stdev=327.87 00:44:34.707 clat (usec): min=1217, max=42034, avg=39387.35, stdev=9843.35 00:44:34.707 lat (usec): min=1243, max=42061, avg=39413.74, stdev=9843.40 00:44:34.707 clat percentiles (usec): 00:44:34.707 | 1.00th=[ 1221], 5.00th=[ 1221], 10.00th=[41157], 20.00th=[41157], 00:44:34.707 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:44:34.707 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:34.707 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:34.707 | 99.99th=[42206] 00:44:34.707 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:44:34.707 slat (nsec): min=8881, max=66935, avg=29684.07, stdev=9033.03 00:44:34.707 clat (usec): min=253, max=933, avg=610.62, stdev=116.02 00:44:34.707 lat (usec): min=264, max=966, avg=640.30, stdev=119.57 00:44:34.707 clat percentiles (usec): 00:44:34.707 | 1.00th=[ 322], 5.00th=[ 400], 10.00th=[ 461], 20.00th=[ 515], 00:44:34.707 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:44:34.707 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 791], 00:44:34.707 | 99.00th=[ 865], 99.50th=[ 906], 99.90th=[ 930], 99.95th=[ 930], 00:44:34.707 | 99.99th=[ 930] 00:44:34.707 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:44:34.707 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:34.707 lat (usec) : 500=16.26%, 750=69.94%, 1000=10.59% 00:44:34.707 lat (msec) : 2=0.19%, 50=3.02% 00:44:34.707 cpu : usr=0.80%, sys=2.20%, ctx=529, majf=0, minf=2 00:44:34.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:34.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.707 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:34.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:34.707 job2: (groupid=0, 
jobs=1): err= 0: pid=45357: Thu Nov 7 13:49:42 2024 00:44:34.707 read: IOPS=292, BW=1171KiB/s (1199kB/s)(1172KiB/1001msec) 00:44:34.707 slat (nsec): min=26159, max=36940, avg=27036.42, stdev=630.59 00:44:34.707 clat (usec): min=767, max=42031, avg=2222.01, stdev=7078.72 00:44:34.707 lat (usec): min=794, max=42058, avg=2249.04, stdev=7078.66 00:44:34.707 clat percentiles (usec): 00:44:34.707 | 1.00th=[ 783], 5.00th=[ 824], 10.00th=[ 881], 20.00th=[ 906], 00:44:34.707 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:44:34.707 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1090], 00:44:34.707 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:34.707 | 99.99th=[42206] 00:44:34.707 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:44:34.707 slat (nsec): min=9910, max=64150, avg=29131.07, stdev=9789.05 00:44:34.707 clat (usec): min=340, max=881, avg=623.37, stdev=103.30 00:44:34.707 lat (usec): min=354, max=935, avg=652.50, stdev=107.88 00:44:34.707 clat percentiles (usec): 00:44:34.707 | 1.00th=[ 359], 5.00th=[ 433], 10.00th=[ 478], 20.00th=[ 545], 00:44:34.707 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 660], 00:44:34.707 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 775], 00:44:34.707 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 881], 99.95th=[ 881], 00:44:34.707 | 99.99th=[ 881] 00:44:34.707 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:44:34.707 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:34.707 lat (usec) : 500=8.70%, 750=49.81%, 1000=28.94% 00:44:34.707 lat (msec) : 2=11.43%, 50=1.12% 00:44:34.707 cpu : usr=1.10%, sys=2.40%, ctx=807, majf=0, minf=1 00:44:34.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:34.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.707 issued rwts: total=293,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:34.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:34.707 job3: (groupid=0, jobs=1): err= 0: pid=45358: Thu Nov 7 13:49:42 2024 00:44:34.707 read: IOPS=16, BW=65.3KiB/s (66.8kB/s)(68.0KiB/1042msec) 00:44:34.707 slat (nsec): min=22858, max=27549, avg=26978.24, stdev=1072.12 00:44:34.707 clat (usec): min=1037, max=42023, avg=39537.92, stdev=9921.76 00:44:34.707 lat (usec): min=1064, max=42050, avg=39564.90, stdev=9921.63 00:44:34.707 clat percentiles (usec): 00:44:34.707 | 1.00th=[ 1037], 5.00th=[ 1037], 10.00th=[41681], 20.00th=[41681], 00:44:34.707 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:44:34.707 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:34.707 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:34.707 | 99.99th=[42206] 00:44:34.707 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:44:34.707 slat (nsec): min=9985, max=61621, avg=30289.67, stdev=10153.37 00:44:34.707 clat (usec): min=190, max=956, avg=680.13, stdev=115.00 00:44:34.708 lat (usec): min=215, max=990, avg=710.42, stdev=119.38 00:44:34.708 clat percentiles (usec): 00:44:34.708 | 1.00th=[ 396], 5.00th=[ 457], 10.00th=[ 523], 20.00th=[ 594], 00:44:34.708 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[ 717], 00:44:34.708 | 70.00th=[ 750], 80.00th=[ 783], 90.00th=[ 807], 95.00th=[ 840], 00:44:34.708 | 99.00th=[ 898], 99.50th=[ 
922], 99.90th=[ 955], 99.95th=[ 955], 00:44:34.708 | 99.99th=[ 955] 00:44:34.708 bw ( KiB/s): min= 4096, max= 4096, per=52.10%, avg=4096.00, stdev= 0.00, samples=1 00:44:34.708 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:34.708 lat (usec) : 250=0.19%, 500=6.81%, 750=61.63%, 1000=28.17% 00:44:34.708 lat (msec) : 2=0.19%, 50=3.02% 00:44:34.708 cpu : usr=0.96%, sys=1.25%, ctx=530, majf=0, minf=1 00:44:34.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:34.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:34.708 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:34.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:34.708 00:44:34.708 Run status group 0 (all jobs): 00:44:34.708 READ: bw=1324KiB/s (1356kB/s), 65.3KiB/s-1171KiB/s (66.8kB/s-1199kB/s), io=1380KiB (1413kB), run=1001-1042msec 00:44:34.708 WRITE: bw=7862KiB/s (8050kB/s), 1965KiB/s-2046KiB/s (2013kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1042msec 00:44:34.708 00:44:34.708 Disk stats (read/write): 00:44:34.708 nvme0n1: ios=62/512, merge=0/0, ticks=690/248, in_queue=938, util=89.28% 00:44:34.708 nvme0n2: ios=63/512, merge=0/0, ticks=584/239, in_queue=823, util=90.62% 00:44:34.708 nvme0n3: ios=186/512, merge=0/0, ticks=1203/315, in_queue=1518, util=96.62% 00:44:34.708 nvme0n4: ios=52/512, merge=0/0, ticks=865/327, in_queue=1192, util=100.00% 00:44:34.708 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:44:34.708 [global] 00:44:34.708 thread=1 00:44:34.708 invalidate=1 00:44:34.708 rw=write 00:44:34.708 time_based=1 00:44:34.708 runtime=1 00:44:34.708 ioengine=libaio 00:44:34.708 direct=1 00:44:34.708 bs=4096 00:44:34.708 iodepth=128 00:44:34.708 norandommap=0 00:44:34.708 numjobs=1 00:44:34.708 00:44:34.708 verify_dump=1 00:44:34.708 verify_backlog=512 00:44:34.708 verify_state_save=0 00:44:34.708 do_verify=1 00:44:34.708 verify=crc32c-intel 00:44:34.708 [job0] 00:44:34.708 filename=/dev/nvme0n1 00:44:34.708 [job1] 00:44:34.708 filename=/dev/nvme0n2 00:44:34.708 [job2] 00:44:34.708 filename=/dev/nvme0n3 00:44:34.708 [job3] 00:44:34.708 filename=/dev/nvme0n4 00:44:34.708 Could not set queue depth (nvme0n1) 00:44:34.708 Could not set queue depth (nvme0n2) 00:44:34.708 Could not set queue depth (nvme0n3) 00:44:34.708 Could not set queue depth (nvme0n4) 00:44:34.970 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:34.970 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:34.970 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:34.970 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:34.970 fio-3.35 00:44:34.970 Starting 4 threads 00:44:36.375 00:44:36.375 job0: (groupid=0, jobs=1): err= 0: pid=45878: Thu Nov 7 13:49:43 2024 00:44:36.375 read: IOPS=6761, BW=26.4MiB/s (27.7MB/s)(26.5MiB/1003msec) 00:44:36.375 slat (nsec): min=963, max=9508.1k, avg=63824.63, stdev=484356.94 00:44:36.375 clat (usec): min=1588, max=21551, avg=8498.01, stdev=3076.37 00:44:36.375 lat (usec): min=2514, max=21600, avg=8561.83, stdev=3100.98 00:44:36.375 clat 
percentiles (usec): 00:44:36.375 | 1.00th=[ 3654], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6456], 00:44:36.375 | 30.00th=[ 7046], 40.00th=[ 7308], 50.00th=[ 7767], 60.00th=[ 8094], 00:44:36.375 | 70.00th=[ 8979], 80.00th=[10421], 90.00th=[11338], 95.00th=[15401], 00:44:36.375 | 99.00th=[19792], 99.50th=[21103], 99.90th=[21627], 99.95th=[21627], 00:44:36.375 | 99.99th=[21627] 00:44:36.375 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:44:36.375 slat (nsec): min=1598, max=14175k, avg=70224.59, stdev=478510.07 00:44:36.375 clat (usec): min=696, max=63457, avg=9673.25, stdev=7815.79 00:44:36.375 lat (usec): min=705, max=71467, avg=9743.47, stdev=7870.52 00:44:36.375 clat percentiles (usec): 00:44:36.376 | 1.00th=[ 2114], 5.00th=[ 4080], 10.00th=[ 4621], 20.00th=[ 5604], 00:44:36.376 | 30.00th=[ 6063], 40.00th=[ 6390], 50.00th=[ 7111], 60.00th=[ 7963], 00:44:36.376 | 70.00th=[ 9372], 80.00th=[11207], 90.00th=[18220], 95.00th=[27657], 00:44:36.376 | 99.00th=[42206], 99.50th=[54264], 99.90th=[57410], 99.95th=[63177], 00:44:36.376 | 99.99th=[63701] 00:44:36.376 bw ( KiB/s): min=28664, max=28672, per=33.63%, avg=28668.00, stdev= 5.66, samples=2 00:44:36.376 iops : min= 7166, max= 7168, avg=7167.00, stdev= 1.41, samples=2 00:44:36.376 lat (usec) : 750=0.02% 00:44:36.376 lat (msec) : 2=0.34%, 4=2.61%, 10=72.03%, 20=20.01%, 50=4.63% 00:44:36.376 lat (msec) : 100=0.36% 00:44:36.376 cpu : usr=5.19%, sys=6.99%, ctx=514, majf=0, minf=1 00:44:36.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:44:36.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:36.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:36.376 issued rwts: total=6782,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:36.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:36.376 job1: (groupid=0, jobs=1): err= 0: pid=45879: Thu Nov 7 13:49:43 2024 00:44:36.376 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:44:36.376 slat (nsec): min=940, max=46463k, avg=80388.36, stdev=784289.63 00:44:36.376 clat (usec): min=3480, max=54835, avg=10570.01, stdev=8715.16 00:44:36.376 lat (usec): min=3487, max=55974, avg=10650.40, stdev=8765.83 00:44:36.376 clat percentiles (usec): 00:44:36.376 | 1.00th=[ 4686], 5.00th=[ 5473], 10.00th=[ 6259], 20.00th=[ 7046], 00:44:36.376 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8094], 00:44:36.376 | 70.00th=[ 8717], 80.00th=[ 9765], 90.00th=[15926], 95.00th=[30802], 00:44:36.376 | 99.00th=[52691], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:44:36.376 | 99.99th=[54789] 00:44:36.376 write: IOPS=6599, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:44:36.376 slat (nsec): min=1615, max=11973k, avg=70776.88, stdev=483808.22 00:44:36.376 clat (usec): min=3360, max=54545, avg=9386.05, stdev=5989.73 00:44:36.376 lat (usec): min=3369, max=54554, avg=9456.83, stdev=6015.01 00:44:36.376 clat percentiles (usec): 00:44:36.376 | 1.00th=[ 4113], 5.00th=[ 5145], 10.00th=[ 6456], 20.00th=[ 6849], 00:44:36.376 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8160], 00:44:36.376 | 70.00th=[ 8586], 80.00th=[ 9765], 90.00th=[13173], 95.00th=[19006], 00:44:36.376 | 99.00th=[33817], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:44:36.376 | 99.99th=[54789] 00:44:36.376 bw ( KiB/s): min=24624, max=27576, per=30.62%, avg=26100.00, stdev=2087.38, samples=2 00:44:36.376 iops : min= 6156, max= 6894, avg=6525.00, stdev=521.84, samples=2 
00:44:36.376 lat (msec) : 4=0.78%, 10=80.09%, 20=12.30%, 50=5.84%, 100=0.99% 00:44:36.376 cpu : usr=5.06%, sys=5.76%, ctx=424, majf=0, minf=1 00:44:36.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:44:36.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:36.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:36.376 issued rwts: total=6144,6652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:36.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:36.376 job2: (groupid=0, jobs=1): err= 0: pid=45880: Thu Nov 7 13:49:43 2024 00:44:36.376 read: IOPS=3420, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1009msec) 00:44:36.376 slat (nsec): min=979, max=17547k, avg=132045.80, stdev=988557.02 00:44:36.376 clat (usec): min=1717, max=54511, avg=16610.85, stdev=9165.94 00:44:36.376 lat (usec): min=1759, max=60211, avg=16742.89, stdev=9246.41 00:44:36.376 clat percentiles (usec): 00:44:36.376 | 1.00th=[ 3425], 5.00th=[ 4752], 10.00th=[ 6521], 20.00th=[ 9896], 00:44:36.376 | 30.00th=[11076], 40.00th=[13829], 50.00th=[15795], 60.00th=[16319], 00:44:36.376 | 70.00th=[18744], 80.00th=[21627], 90.00th=[30278], 95.00th=[36963], 00:44:36.376 | 99.00th=[47449], 99.50th=[50070], 99.90th=[53740], 99.95th=[54264], 00:44:36.376 | 99.99th=[54264] 00:44:36.376 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:44:36.376 slat (nsec): min=1700, max=15890k, avg=142901.49, stdev=863036.04 00:44:36.376 clat (usec): min=463, max=70242, avg=19625.96, stdev=14359.15 00:44:36.376 lat (usec): min=497, max=70252, avg=19768.86, stdev=14458.91 00:44:36.376 clat percentiles (usec): 00:44:36.376 | 1.00th=[ 1156], 5.00th=[ 3032], 10.00th=[ 5538], 20.00th=[ 9765], 00:44:36.376 | 30.00th=[11469], 40.00th=[12780], 50.00th=[14353], 60.00th=[17171], 00:44:36.376 | 70.00th=[22152], 80.00th=[31589], 90.00th=[40109], 95.00th=[48497], 00:44:36.376 | 99.00th=[65274], 99.50th=[66323], 99.90th=[67634], 99.95th=[67634], 00:44:36.376 | 99.99th=[69731] 00:44:36.376 bw ( KiB/s): min= 9760, max=18912, per=16.82%, avg=14336.00, stdev=6471.44, samples=2 00:44:36.376 iops : min= 2440, max= 4728, avg=3584.00, stdev=1617.86, samples=2 00:44:36.376 lat (usec) : 500=0.04%, 750=0.11%, 1000=0.09% 00:44:36.376 lat (msec) : 2=0.54%, 4=5.30%, 10=15.85%, 20=49.52%, 50=26.08% 00:44:36.376 lat (msec) : 100=2.46% 00:44:36.376 cpu : usr=1.88%, sys=4.56%, ctx=300, majf=0, minf=1 00:44:36.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:44:36.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:36.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:36.376 issued rwts: total=3451,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:36.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:36.376 job3: (groupid=0, jobs=1): err= 0: pid=45881: Thu Nov 7 13:49:43 2024 00:44:36.376 read: IOPS=3588, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1009msec) 00:44:36.376 slat (nsec): min=943, max=27589k, avg=130600.44, stdev=1101311.32 00:44:36.376 clat (usec): min=2294, max=71547, avg=16677.78, stdev=13017.40 00:44:36.376 lat (usec): min=5949, max=80690, avg=16808.38, stdev=13115.54 00:44:36.376 clat percentiles (usec): 00:44:36.376 | 1.00th=[ 6390], 5.00th=[ 7701], 10.00th=[ 8979], 20.00th=[ 9765], 00:44:36.376 | 30.00th=[10814], 40.00th=[11600], 50.00th=[11994], 60.00th=[12780], 00:44:36.376 | 70.00th=[15008], 80.00th=[17433], 90.00th=[32113], 95.00th=[52167], 00:44:36.376 | 
99.00th=[69731], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:44:36.376 | 99.99th=[71828] 00:44:36.376 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:44:36.376 slat (nsec): min=1714, max=28717k, avg=124549.63, stdev=1015251.96 00:44:36.376 clat (usec): min=3480, max=82039, avg=16334.95, stdev=10542.66 00:44:36.376 lat (usec): min=3507, max=82062, avg=16459.50, stdev=10622.26 00:44:36.376 clat percentiles (usec): 00:44:36.376 | 1.00th=[ 4293], 5.00th=[ 6128], 10.00th=[ 7177], 20.00th=[ 8586], 00:44:36.376 | 30.00th=[10290], 40.00th=[11207], 50.00th=[12911], 60.00th=[15926], 00:44:36.376 | 70.00th=[19006], 80.00th=[20317], 90.00th=[30278], 95.00th=[36963], 00:44:36.376 | 99.00th=[62653], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:44:36.376 | 99.99th=[82314] 00:44:36.376 bw ( KiB/s): min=15656, max=16384, per=18.80%, avg=16020.00, stdev=514.77, samples=2 00:44:36.376 iops : min= 3914, max= 4096, avg=4005.00, stdev=128.69, samples=2 00:44:36.376 lat (msec) : 4=0.04%, 10=24.15%, 20=59.04%, 50=13.04%, 100=3.73% 00:44:36.376 cpu : usr=2.98%, sys=3.67%, ctx=322, majf=0, minf=1 00:44:36.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:44:36.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:36.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:36.376 issued rwts: total=3621,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:36.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:36.376 00:44:36.376 Run status group 0 (all jobs): 00:44:36.376 READ: bw=77.4MiB/s (81.2MB/s), 13.4MiB/s-26.4MiB/s (14.0MB/s-27.7MB/s), io=78.1MiB (81.9MB), run=1003-1009msec 00:44:36.376 WRITE: bw=83.2MiB/s (87.3MB/s), 13.9MiB/s-27.9MiB/s (14.5MB/s-29.3MB/s), io=84.0MiB (88.1MB), run=1003-1009msec 00:44:36.376 00:44:36.376 Disk stats (read/write): 00:44:36.376 nvme0n1: ios=5685/6091, merge=0/0, ticks=37461/40955, in_queue=78416, util=84.47% 00:44:36.376 nvme0n2: ios=5488/5632, merge=0/0, ticks=24600/19270, in_queue=43870, util=85.22% 00:44:36.376 nvme0n3: ios=3133/3103, merge=0/0, ticks=38387/52765, in_queue=91152, util=92.08% 00:44:36.376 nvme0n4: ios=3129/3377, merge=0/0, ticks=24244/21601, in_queue=45845, util=93.27% 00:44:36.376 13:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:44:36.376 [global] 00:44:36.376 thread=1 00:44:36.376 invalidate=1 00:44:36.376 rw=randwrite 00:44:36.376 time_based=1 00:44:36.376 runtime=1 00:44:36.376 ioengine=libaio 00:44:36.376 direct=1 00:44:36.376 bs=4096 00:44:36.376 iodepth=128 00:44:36.376 norandommap=0 00:44:36.376 numjobs=1 00:44:36.376 00:44:36.376 verify_dump=1 00:44:36.376 verify_backlog=512 00:44:36.376 verify_state_save=0 00:44:36.376 do_verify=1 00:44:36.376 verify=crc32c-intel 00:44:36.376 [job0] 00:44:36.376 filename=/dev/nvme0n1 00:44:36.376 [job1] 00:44:36.376 filename=/dev/nvme0n2 00:44:36.376 [job2] 00:44:36.376 filename=/dev/nvme0n3 00:44:36.376 [job3] 00:44:36.376 filename=/dev/nvme0n4 00:44:36.376 Could not set queue depth (nvme0n1) 00:44:36.376 Could not set queue depth (nvme0n2) 00:44:36.376 Could not set queue depth (nvme0n3) 00:44:36.376 Could not set queue depth (nvme0n4) 00:44:36.639 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:36.639 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:36.639 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:36.639 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:36.639 fio-3.35 00:44:36.639 Starting 4 threads 00:44:38.046 00:44:38.046 job0: (groupid=0, jobs=1): err= 0: pid=46390: Thu Nov 7 13:49:45 2024 00:44:38.046 read: IOPS=6245, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1004msec) 00:44:38.046 slat (nsec): min=1010, max=10307k, avg=78292.64, stdev=624803.34 00:44:38.046 clat (usec): min=1225, max=32047, avg=9587.20, stdev=3182.61 00:44:38.046 lat (usec): min=2401, max=32051, avg=9665.49, stdev=3232.81 00:44:38.046 clat percentiles (usec): 00:44:38.046 | 1.00th=[ 4293], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 7504], 00:44:38.046 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9503], 00:44:38.046 | 70.00th=[10421], 80.00th=[11076], 90.00th=[12780], 95.00th=[15401], 00:44:38.046 | 99.00th=[22938], 99.50th=[26870], 99.90th=[31589], 99.95th=[32113], 00:44:38.046 | 99.99th=[32113] 00:44:38.046 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:44:38.046 slat (nsec): min=1641, max=8470.7k, avg=70512.12, stdev=439695.09 00:44:38.046 clat (usec): min=1159, max=32033, avg=10078.10, stdev=4880.51 00:44:38.046 lat (usec): min=1168, max=32035, avg=10148.61, stdev=4909.80 00:44:38.046 clat percentiles (usec): 00:44:38.046 | 1.00th=[ 2737], 5.00th=[ 4555], 10.00th=[ 4948], 20.00th=[ 6718], 00:44:38.046 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 9503], 00:44:38.046 | 70.00th=[10683], 80.00th=[13566], 90.00th=[15664], 95.00th=[20579], 00:44:38.046 | 99.00th=[27657], 99.50th=[28705], 99.90th=[29492], 99.95th=[29492], 00:44:38.046 | 99.99th=[32113] 00:44:38.046 bw ( KiB/s): min=24576, max=28656, per=27.32%, avg=26616.00, stdev=2885.00, samples=2 00:44:38.046 iops : min= 6144, max= 7164, avg=6654.00, stdev=721.25, samples=2 00:44:38.046 lat (msec) : 2=0.14%, 4=1.04%, 10=62.78%, 20=31.70%, 50=4.35% 00:44:38.046 cpu : usr=4.89%, sys=6.38%, ctx=510, majf=0, minf=1 00:44:38.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:44:38.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:38.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:38.046 issued rwts: total=6270,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:38.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:38.046 job1: (groupid=0, jobs=1): err= 0: pid=46391: Thu Nov 7 13:49:45 2024 00:44:38.046 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:44:38.046 slat (nsec): min=945, max=16320k, avg=77698.33, stdev=498529.04 00:44:38.046 clat (usec): min=3763, max=37956, avg=10066.12, stdev=4430.99 00:44:38.046 lat (usec): min=3769, max=37984, avg=10143.82, stdev=4470.56 00:44:38.046 clat percentiles (usec): 00:44:38.046 | 1.00th=[ 5866], 5.00th=[ 7373], 10.00th=[ 7898], 20.00th=[ 8291], 00:44:38.046 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:44:38.046 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[11469], 95.00th=[21890], 00:44:38.046 | 99.00th=[30278], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:44:38.046 | 99.99th=[38011] 00:44:38.046 write: IOPS=6992, BW=27.3MiB/s (28.6MB/s)(27.4MiB/1003msec); 0 zone resets 00:44:38.046 slat (nsec): min=1571, max=9244.1k, avg=65460.23, stdev=376708.90 00:44:38.046 
clat (usec): min=2500, max=23044, avg=8521.97, stdev=1877.58 00:44:38.046 lat (usec): min=2504, max=23077, avg=8587.43, stdev=1912.51 00:44:38.046 clat percentiles (usec): 00:44:38.046 | 1.00th=[ 5407], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7308], 00:44:38.046 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8586], 00:44:38.046 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[10028], 95.00th=[11994], 00:44:38.046 | 99.00th=[16909], 99.50th=[19268], 99.90th=[19530], 99.95th=[20317], 00:44:38.046 | 99.99th=[22938] 00:44:38.046 bw ( KiB/s): min=25168, max=29920, per=28.27%, avg=27544.00, stdev=3360.17, samples=2 00:44:38.046 iops : min= 6292, max= 7480, avg=6886.00, stdev=840.04, samples=2 00:44:38.046 lat (msec) : 4=0.64%, 10=85.75%, 20=10.57%, 50=3.04% 00:44:38.046 cpu : usr=2.69%, sys=5.19%, ctx=725, majf=0, minf=1 00:44:38.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:44:38.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:38.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:38.046 issued rwts: total=6656,7013,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:38.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:38.046 job2: (groupid=0, jobs=1): err= 0: pid=46392: Thu Nov 7 13:49:45 2024 00:44:38.046 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:44:38.046 slat (nsec): min=913, max=16998k, avg=116046.10, stdev=740456.62 00:44:38.046 clat (usec): min=5646, max=48585, avg=15272.75, stdev=6201.54 00:44:38.046 lat (usec): min=5648, max=48587, avg=15388.80, stdev=6244.97 00:44:38.046 clat percentiles (usec): 00:44:38.046 | 1.00th=[ 6259], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11338], 00:44:38.046 | 30.00th=[12387], 40.00th=[13042], 50.00th=[13698], 60.00th=[14353], 00:44:38.046 | 70.00th=[14877], 80.00th=[17957], 90.00th=[22414], 95.00th=[29754], 00:44:38.046 | 99.00th=[37487], 99.50th=[40633], 99.90th=[43254], 99.95th=[44303], 00:44:38.046 | 99.99th=[48497] 00:44:38.046 write: IOPS=4628, BW=18.1MiB/s (19.0MB/s)(18.1MiB/1003msec); 0 zone resets 00:44:38.046 slat (nsec): min=1521, max=9682.2k, avg=96988.37, stdev=501196.41 00:44:38.046 clat (usec): min=733, max=40550, avg=12251.35, stdev=4594.04 00:44:38.046 lat (usec): min=1233, max=40552, avg=12348.34, stdev=4625.61 00:44:38.046 clat percentiles (usec): 00:44:38.046 | 1.00th=[ 4883], 5.00th=[ 7111], 10.00th=[ 7898], 20.00th=[ 9634], 00:44:38.046 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[12256], 00:44:38.046 | 70.00th=[12780], 80.00th=[13960], 90.00th=[15926], 95.00th=[19530], 00:44:38.046 | 99.00th=[36439], 99.50th=[38536], 99.90th=[39584], 99.95th=[39584], 00:44:38.046 | 99.99th=[40633] 00:44:38.046 bw ( KiB/s): min=16384, max=20480, per=18.92%, avg=18432.00, stdev=2896.31, samples=2 00:44:38.046 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:44:38.046 lat (usec) : 750=0.01% 00:44:38.046 lat (msec) : 2=0.11%, 4=0.35%, 10=15.75%, 20=73.44%, 50=10.35% 00:44:38.046 cpu : usr=2.10%, sys=2.69%, ctx=607, majf=0, minf=2 00:44:38.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:44:38.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:38.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:38.046 issued rwts: total=4608,4642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:38.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:38.046 job3: (groupid=0, jobs=1): err= 0: 
pid=46393: Thu Nov 7 13:49:45 2024 00:44:38.046 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:44:38.046 slat (nsec): min=967, max=4481.7k, avg=85286.91, stdev=406252.50 00:44:38.046 clat (usec): min=7371, max=16595, avg=11012.60, stdev=1694.16 00:44:38.047 lat (usec): min=7382, max=18752, avg=11097.89, stdev=1687.48 00:44:38.047 clat percentiles (usec): 00:44:38.047 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9765], 00:44:38.047 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:44:38.047 | 70.00th=[11469], 80.00th=[12256], 90.00th=[13698], 95.00th=[14615], 00:44:38.047 | 99.00th=[15664], 99.50th=[16188], 99.90th=[16581], 99.95th=[16581], 00:44:38.047 | 99.99th=[16581] 00:44:38.047 write: IOPS=6124, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:44:38.047 slat (nsec): min=1580, max=12326k, avg=81130.86, stdev=406634.80 00:44:38.047 clat (usec): min=2169, max=37339, avg=10503.35, stdev=3492.80 00:44:38.047 lat (usec): min=2762, max=37355, avg=10584.48, stdev=3510.05 00:44:38.047 clat percentiles (usec): 00:44:38.047 | 1.00th=[ 6980], 5.00th=[ 7963], 10.00th=[ 8160], 20.00th=[ 8586], 00:44:38.047 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10421], 00:44:38.047 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12387], 95.00th=[13698], 00:44:38.047 | 99.00th=[30802], 99.50th=[34341], 99.90th=[37487], 99.95th=[37487], 00:44:38.047 | 99.99th=[37487] 00:44:38.047 bw ( KiB/s): min=23552, max=24576, per=24.70%, avg=24064.00, stdev=724.08, samples=2 00:44:38.047 iops : min= 5888, max= 6144, avg=6016.00, stdev=181.02, samples=2 00:44:38.047 lat (msec) : 4=0.14%, 10=40.57%, 20=58.20%, 50=1.09% 00:44:38.047 cpu : usr=1.70%, sys=4.49%, ctx=818, majf=0, minf=1 00:44:38.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:44:38.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:38.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:38.047 issued rwts: total=5632,6143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:38.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:38.047 00:44:38.047 Run status group 0 (all jobs): 00:44:38.047 READ: bw=90.1MiB/s (94.5MB/s), 17.9MiB/s-25.9MiB/s (18.8MB/s-27.2MB/s), io=90.5MiB (94.9MB), run=1003-1004msec 00:44:38.047 WRITE: bw=95.1MiB/s (99.8MB/s), 18.1MiB/s-27.3MiB/s (19.0MB/s-28.6MB/s), io=95.5MiB (100MB), run=1003-1004msec 00:44:38.047 00:44:38.047 Disk stats (read/write): 00:44:38.047 nvme0n1: ios=5135/5120, merge=0/0, ticks=47970/54114, in_queue=102084, util=87.17% 00:44:38.047 nvme0n2: ios=5672/5708, merge=0/0, ticks=21700/16091, in_queue=37791, util=91.13% 00:44:38.047 nvme0n3: ios=3641/3877, merge=0/0, ticks=21492/19283, in_queue=40775, util=95.14% 00:44:38.047 nvme0n4: ios=4874/5120, merge=0/0, ticks=14280/13683, in_queue=27963, util=97.33% 00:44:38.047 13:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:44:38.047 13:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=46729 00:44:38.047 13:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:44:38.047 13:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:44:38.047 [global] 00:44:38.047 thread=1 00:44:38.047 invalidate=1 00:44:38.047 rw=read 00:44:38.047 
time_based=1 00:44:38.047 runtime=10 00:44:38.047 ioengine=libaio 00:44:38.047 direct=1 00:44:38.047 bs=4096 00:44:38.047 iodepth=1 00:44:38.047 norandommap=1 00:44:38.047 numjobs=1 00:44:38.047 00:44:38.047 [job0] 00:44:38.047 filename=/dev/nvme0n1 00:44:38.047 [job1] 00:44:38.047 filename=/dev/nvme0n2 00:44:38.047 [job2] 00:44:38.047 filename=/dev/nvme0n3 00:44:38.047 [job3] 00:44:38.047 filename=/dev/nvme0n4 00:44:38.047 Could not set queue depth (nvme0n1) 00:44:38.047 Could not set queue depth (nvme0n2) 00:44:38.047 Could not set queue depth (nvme0n3) 00:44:38.047 Could not set queue depth (nvme0n4) 00:44:38.309 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:38.309 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:38.309 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:38.309 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:38.309 fio-3.35 00:44:38.309 Starting 4 threads 00:44:40.852 13:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:44:41.113 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=12345344, buflen=4096 00:44:41.113 fio: pid=46916, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:41.113 13:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:44:41.113 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10362880, buflen=4096 00:44:41.113 fio: pid=46915, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:41.113 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:41.113 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:44:41.373 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2547712, buflen=4096 00:44:41.373 fio: pid=46913, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:41.373 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:41.373 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:44:41.634 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10858496, buflen=4096 00:44:41.634 fio: pid=46914, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:41.634 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:41.634 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:44:41.634 00:44:41.634 job0: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=46913: Thu Nov 7 13:49:49 2024 00:44:41.634 read: IOPS=207, BW=828KiB/s (848kB/s)(2488KiB/3004msec) 00:44:41.634 slat (usec): min=23, max=5496, avg=34.02, stdev=219.22 00:44:41.634 clat (usec): min=859, max=42342, avg=4752.51, stdev=11544.07 00:44:41.634 lat (usec): min=884, max=46980, avg=4786.54, stdev=11574.10 00:44:41.634 clat percentiles (usec): 00:44:41.634 | 1.00th=[ 922], 5.00th=[ 1020], 10.00th=[ 1074], 20.00th=[ 1106], 00:44:41.634 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:44:41.634 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1303], 95.00th=[41681], 00:44:41.634 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:41.634 | 99.99th=[42206] 00:44:41.634 bw ( KiB/s): min= 96, max= 2032, per=8.18%, avg=905.60, stdev=913.60, samples=5 00:44:41.634 iops : min= 24, max= 508, avg=226.40, stdev=228.40, samples=5 00:44:41.634 lat (usec) : 1000=3.05% 00:44:41.634 lat (msec) : 2=87.96%, 50=8.83% 00:44:41.634 cpu : usr=0.20%, sys=0.63%, ctx=624, majf=0, minf=1 00:44:41.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:41.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.634 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.634 issued rwts: total=623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:41.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:41.634 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=46914: Thu Nov 7 13:49:49 2024 00:44:41.634 read: IOPS=832, BW=3327KiB/s (3407kB/s)(10.4MiB/3187msec) 00:44:41.634 slat (usec): min=6, max=16553, avg=39.80, stdev=418.82 00:44:41.634 clat (usec): min=319, max=42046, avg=1148.05, stdev=3821.45 00:44:41.634 lat (usec): min=346, max=42071, avg=1187.85, stdev=3843.32 00:44:41.634 clat percentiles (usec): 00:44:41.634 | 1.00th=[ 486], 5.00th=[ 603], 10.00th=[ 652], 20.00th=[ 709], 00:44:41.634 | 30.00th=[ 742], 40.00th=[ 775], 50.00th=[ 799], 60.00th=[ 824], 00:44:41.634 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 914], 95.00th=[ 938], 00:44:41.634 | 99.00th=[ 2008], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:41.634 | 99.99th=[42206] 00:44:41.634 bw ( KiB/s): min= 88, max= 4848, per=29.49%, avg=3263.83, stdev=2206.45, samples=6 00:44:41.634 iops : min= 22, max= 1212, avg=815.83, stdev=551.52, samples=6 00:44:41.634 lat (usec) : 500=1.28%, 750=31.26%, 1000=65.46% 00:44:41.634 lat (msec) : 2=0.94%, 4=0.15%, 50=0.87% 00:44:41.634 cpu : usr=1.57%, sys=2.86%, ctx=2656, majf=0, minf=2 00:44:41.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:41.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.634 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.634 issued rwts: total=2652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:41.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:41.634 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=46915: Thu Nov 7 13:49:49 2024 00:44:41.634 read: IOPS=912, BW=3647KiB/s (3734kB/s)(9.88MiB/2775msec) 00:44:41.634 slat (nsec): min=6906, max=62194, avg=27037.27, stdev=3177.05 00:44:41.634 clat (usec): min=743, max=1326, avg=1054.41, stdev=86.89 00:44:41.634 lat (usec): min=753, max=1353, avg=1081.45, stdev=87.09 00:44:41.634 clat percentiles (usec): 
00:44:41.634 | 1.00th=[ 816], 5.00th=[ 898], 10.00th=[ 938], 20.00th=[ 988], 00:44:41.634 | 30.00th=[ 1020], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1074], 00:44:41.634 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:44:41.634 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1303], 99.95th=[ 1319], 00:44:41.634 | 99.99th=[ 1319] 00:44:41.634 bw ( KiB/s): min= 3656, max= 3704, per=33.25%, avg=3680.00, stdev=17.89, samples=5 00:44:41.634 iops : min= 914, max= 926, avg=920.00, stdev= 4.47, samples=5 00:44:41.634 lat (usec) : 750=0.04%, 1000=24.10% 00:44:41.634 lat (msec) : 2=75.82% 00:44:41.634 cpu : usr=1.66%, sys=3.64%, ctx=2531, majf=0, minf=2 00:44:41.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:41.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.634 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.634 issued rwts: total=2531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:41.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:41.634 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=46916: Thu Nov 7 13:49:49 2024 00:44:41.634 read: IOPS=1161, BW=4644KiB/s (4756kB/s)(11.8MiB/2596msec) 00:44:41.634 slat (nsec): min=6532, max=62221, avg=26118.92, stdev=5939.02 00:44:41.634 clat (usec): min=337, max=1115, avg=820.18, stdev=102.07 00:44:41.634 lat (usec): min=365, max=1142, avg=846.30, stdev=102.91 00:44:41.634 clat percentiles (usec): 00:44:41.634 | 1.00th=[ 562], 5.00th=[ 635], 10.00th=[ 668], 20.00th=[ 734], 00:44:41.634 | 30.00th=[ 775], 40.00th=[ 807], 50.00th=[ 840], 60.00th=[ 865], 00:44:41.634 | 70.00th=[ 889], 80.00th=[ 906], 90.00th=[ 938], 95.00th=[ 963], 00:44:41.634 | 99.00th=[ 1004], 99.50th=[ 1020], 99.90th=[ 1057], 99.95th=[ 1074], 00:44:41.634 | 99.99th=[ 1123] 00:44:41.634 bw ( KiB/s): min= 4624, max= 4792, per=42.49%, avg=4702.40, stdev=80.08, samples=5 00:44:41.634 iops : min= 1156, max= 1198, avg=1175.60, stdev=20.02, samples=5 00:44:41.634 lat (usec) : 500=0.43%, 750=23.02%, 1000=75.36% 00:44:41.634 lat (msec) : 2=1.16% 00:44:41.634 cpu : usr=2.12%, sys=4.35%, ctx=3016, majf=0, minf=2 00:44:41.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:41.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.634 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:41.634 issued rwts: total=3015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:41.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:41.634 00:44:41.634 Run status group 0 (all jobs): 00:44:41.634 READ: bw=10.8MiB/s (11.3MB/s), 828KiB/s-4644KiB/s (848kB/s-4756kB/s), io=34.4MiB (36.1MB), run=2596-3187msec 00:44:41.634 00:44:41.634 Disk stats (read/write): 00:44:41.634 nvme0n1: ios=618/0, merge=0/0, ticks=2781/0, in_queue=2781, util=94.56% 00:44:41.634 nvme0n2: ios=2544/0, merge=0/0, ticks=2646/0, in_queue=2646, util=94.45% 00:44:41.634 nvme0n3: ios=2375/0, merge=0/0, ticks=2262/0, in_queue=2262, util=96.07% 00:44:41.634 nvme0n4: ios=3015/0, merge=0/0, ticks=2084/0, in_queue=2084, util=96.16% 00:44:41.894 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:41.894 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:44:42.154 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:42.154 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:44:42.414 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:42.414 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:44:42.414 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:42.414 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:44:42.674 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:44:42.674 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 46729 00:44:42.674 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:44:42.674 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:43.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:43.614 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:43.614 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:44:43.614 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:44:43.614 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:43.614 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:44:43.614 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:43.614 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:44:43.614 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:44:43.614 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:44:43.614 nvmf hotplug test: fio failed as expected 00:44:43.614 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:43.614 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- 
# rm -f ./local-job1-1-verify.state 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:43.615 rmmod nvme_tcp 00:44:43.615 rmmod nvme_fabrics 00:44:43.615 rmmod nvme_keyring 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 43386 ']' 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 43386 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 43386 ']' 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 43386 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:43.615 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 43386 00:44:43.876 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:44:43.876 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:44:43.876 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 43386' 00:44:43.876 killing process with pid 43386 00:44:43.876 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 43386 00:44:43.876 13:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 43386 00:44:44.818 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:44.818 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:44.818 13:49:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:44.818 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:44:44.818 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:44:44.818 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:44.818 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:44:44.818 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:44.818 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:44.818 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:44.818 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:44.818 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:46.730 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:46.730 00:44:46.730 real 0m30.685s 00:44:46.730 user 2m21.448s 00:44:46.730 sys 0m13.252s 00:44:46.730 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:46.730 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:46.730 ************************************ 00:44:46.730 END TEST nvmf_fio_target 00:44:46.730 ************************************ 00:44:46.730 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:44:46.730 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:44:46.730 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:46.730 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:46.730 ************************************ 00:44:46.730 START TEST nvmf_bdevio 00:44:46.730 ************************************ 00:44:46.730 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:44:46.730 * Looking for test storage... 
00:44:46.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:46.730 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:46.730 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:44:46.730 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:46.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:46.991 --rc genhtml_branch_coverage=1 00:44:46.991 --rc genhtml_function_coverage=1 00:44:46.991 --rc genhtml_legend=1 00:44:46.991 --rc geninfo_all_blocks=1 00:44:46.991 --rc geninfo_unexecuted_blocks=1 00:44:46.991 00:44:46.991 ' 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:46.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:46.991 --rc genhtml_branch_coverage=1 00:44:46.991 --rc genhtml_function_coverage=1 00:44:46.991 --rc genhtml_legend=1 00:44:46.991 --rc geninfo_all_blocks=1 00:44:46.991 --rc geninfo_unexecuted_blocks=1 00:44:46.991 00:44:46.991 ' 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:46.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:46.991 --rc genhtml_branch_coverage=1 00:44:46.991 --rc genhtml_function_coverage=1 00:44:46.991 --rc genhtml_legend=1 00:44:46.991 --rc geninfo_all_blocks=1 00:44:46.991 --rc geninfo_unexecuted_blocks=1 00:44:46.991 00:44:46.991 ' 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:46.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:46.991 --rc genhtml_branch_coverage=1 00:44:46.991 --rc genhtml_function_coverage=1 00:44:46.991 --rc genhtml_legend=1 00:44:46.991 --rc geninfo_all_blocks=1 00:44:46.991 --rc geninfo_unexecuted_blocks=1 00:44:46.991 00:44:46.991 ' 00:44:46.991 13:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:46.991 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:46.992 13:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:44:46.992 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:55.126 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:55.126 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:44:55.126 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:55.126 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:55.126 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:55.126 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:55.126 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:55.126 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:44:55.126 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:44:55.126 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:44:55.126 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:55.127 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:55.127 13:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:55.127 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:55.127 Found net devices under 0000:31:00.0: cvl_0_0 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:55.127 Found net devices under 0000:31:00.1: cvl_0_1 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:55.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:55.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:44:55.127 00:44:55.127 --- 10.0.0.2 ping statistics --- 00:44:55.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:55.127 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:55.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:55.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:44:55.127 00:44:55.127 --- 10.0.0.1 ping statistics --- 00:44:55.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:55.127 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:44:55.127 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:55.128 13:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=52580 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 52580 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 52580 ']' 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:55.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:55.128 13:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:55.128 [2024-11-07 13:50:03.057530] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:55.128 [2024-11-07 13:50:03.060168] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:44:55.128 [2024-11-07 13:50:03.060269] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:55.389 [2024-11-07 13:50:03.246833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:55.389 [2024-11-07 13:50:03.371127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:55.389 [2024-11-07 13:50:03.371186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:55.389 [2024-11-07 13:50:03.371202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:55.389 [2024-11-07 13:50:03.371214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:55.389 [2024-11-07 13:50:03.371226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:55.389 [2024-11-07 13:50:03.374055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:44:55.389 [2024-11-07 13:50:03.374290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:44:55.389 [2024-11-07 13:50:03.374424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:55.389 [2024-11-07 13:50:03.374450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:44:55.649 [2024-11-07 13:50:03.643741] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
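The nvmftestinit trace above is the whole point-to-point rig for a phy run: the target-side E810 port (cvl_0_0) is moved into a private network namespace and addressed so that initiator and target traffic traverse the NIC rather than the kernel loopback path. A minimal sketch of that plumbing, using the interface names, addresses, and port observed in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagged so teardown can strip exactly this rule again:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace

Every subsequent target-side command is prefixed with NVMF_TARGET_NS_CMD, i.e. ip netns exec cvl_0_0_ns_spdk, which is why nvmf_tgt is launched through ip netns exec below.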
00:44:55.909 [2024-11-07 13:50:03.658580] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:55.909 [2024-11-07 13:50:03.658913] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:55.909 [2024-11-07 13:50:03.659205] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:44:55.909 [2024-11-07 13:50:03.662475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:55.909 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:55.909 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:44:55.909 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:55.909 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:55.909 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:55.910 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:55.910 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:55.910 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:55.910 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:55.910 [2024-11-07 13:50:03.887835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:56.170 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.170 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:56.170 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.170 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:56.170 Malloc0 00:44:56.170 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.170 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:56.170 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.171 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:56.171 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.171 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:56.171 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.171 13:50:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:56.171 [2024-11-07 13:50:04.015877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:56.171 { 00:44:56.171 "params": { 00:44:56.171 "name": "Nvme$subsystem", 00:44:56.171 "trtype": "$TEST_TRANSPORT", 00:44:56.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:56.171 "adrfam": "ipv4", 00:44:56.171 "trsvcid": "$NVMF_PORT", 00:44:56.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:56.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:56.171 "hdgst": ${hdgst:-false}, 00:44:56.171 "ddgst": ${ddgst:-false} 00:44:56.171 }, 00:44:56.171 "method": "bdev_nvme_attach_controller" 00:44:56.171 } 00:44:56.171 EOF 00:44:56.171 )") 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:44:56.171 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:56.171 "params": { 00:44:56.171 "name": "Nvme1", 00:44:56.171 "trtype": "tcp", 00:44:56.171 "traddr": "10.0.0.2", 00:44:56.171 "adrfam": "ipv4", 00:44:56.171 "trsvcid": "4420", 00:44:56.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:56.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:56.171 "hdgst": false, 00:44:56.171 "ddgst": false 00:44:56.171 }, 00:44:56.171 "method": "bdev_nvme_attach_controller" 00:44:56.171 }' 00:44:56.171 [2024-11-07 13:50:04.114980] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
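With the target up, bdevio.sh provisions the device under test entirely over the RPC socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE from the top of the script), and subsystem cnode1 exposing that namespace on 10.0.0.2:4420. Replayed standalone through scripts/rpc.py instead of the harness's rpc_cmd wrapper, the sequence would look roughly like:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary itself never touches that RPC socket; it receives the bdev_nvme_attach_controller JSON printed above on a spare descriptor (--json /dev/fd/62) and dials the subsystem as an ordinary NVMe/TCP initiator, roughly test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) in shorthand.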
00:44:56.171 [2024-11-07 13:50:04.115102] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52927 ] 00:44:56.432 [2024-11-07 13:50:04.273748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:56.432 [2024-11-07 13:50:04.374511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:56.432 [2024-11-07 13:50:04.374597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:56.432 [2024-11-07 13:50:04.374598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:56.692 I/O targets: 00:44:56.692 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:44:56.692 00:44:56.692 00:44:56.692 CUnit - A unit testing framework for C - Version 2.1-3 00:44:56.692 http://cunit.sourceforge.net/ 00:44:56.692 00:44:56.692 00:44:56.692 Suite: bdevio tests on: Nvme1n1 00:44:56.953 Test: blockdev write read block ...passed 00:44:56.953 Test: blockdev write zeroes read block ...passed 00:44:56.953 Test: blockdev write zeroes read no split ...passed 00:44:56.953 Test: blockdev write zeroes read split ...passed 00:44:56.953 Test: blockdev write zeroes read split partial ...passed 00:44:56.953 Test: blockdev reset ...[2024-11-07 13:50:04.859203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:44:56.953 [2024-11-07 13:50:04.859319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000417600 (9): Bad file descriptor 00:44:56.953 [2024-11-07 13:50:04.867911] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:44:56.953 passed 00:44:56.953 Test: blockdev write read 8 blocks ...passed 00:44:56.953 Test: blockdev write read size > 128k ...passed 00:44:56.953 Test: blockdev write read invalid size ...passed 00:44:56.953 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:44:56.953 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:44:56.953 Test: blockdev write read max offset ...passed 00:44:57.213 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:44:57.213 Test: blockdev writev readv 8 blocks ...passed 00:44:57.213 Test: blockdev writev readv 30 x 1block ...passed 00:44:57.213 Test: blockdev writev readv block ...passed 00:44:57.213 Test: blockdev writev readv size > 128k ...passed 00:44:57.213 Test: blockdev writev readv size > 128k in two iovs ...passed 00:44:57.213 Test: blockdev comparev and writev ...[2024-11-07 13:50:05.054143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:57.213 [2024-11-07 13:50:05.054177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:57.213 [2024-11-07 13:50:05.054194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:57.213 [2024-11-07 13:50:05.054204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:57.213 [2024-11-07 13:50:05.054829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:57.213 [2024-11-07 13:50:05.054845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:57.213 [2024-11-07 13:50:05.054858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:57.213 [2024-11-07 13:50:05.054873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:57.213 [2024-11-07 13:50:05.055455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:57.213 [2024-11-07 13:50:05.055470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:57.213 [2024-11-07 13:50:05.055485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:57.213 [2024-11-07 13:50:05.055493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:57.213 [2024-11-07 13:50:05.056098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:57.213 [2024-11-07 13:50:05.056113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:57.213 [2024-11-07 13:50:05.056129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:57.213 [2024-11-07 13:50:05.056137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:44:57.213 passed 00:44:57.213 Test: blockdev nvme passthru rw ...passed 00:44:57.214 Test: blockdev nvme passthru vendor specific ...[2024-11-07 13:50:05.140749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:57.214 [2024-11-07 13:50:05.140771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:44:57.214 [2024-11-07 13:50:05.141142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:57.214 [2024-11-07 13:50:05.141155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:44:57.214 [2024-11-07 13:50:05.141531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:57.214 [2024-11-07 13:50:05.141543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:44:57.214 [2024-11-07 13:50:05.141928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:57.214 [2024-11-07 13:50:05.141941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:44:57.214 passed 00:44:57.214 Test: blockdev nvme admin passthru ...passed 00:44:57.214 Test: blockdev copy ...passed 00:44:57.214 00:44:57.214 Run Summary: Type Total Ran Passed Failed Inactive 00:44:57.214 suites 1 1 n/a 0 0 00:44:57.214 tests 23 23 23 0 0 00:44:57.214 asserts 152 152 152 0 n/a 00:44:57.214 00:44:57.214 Elapsed time = 1.080 seconds 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:58.157 rmmod nvme_tcp 00:44:58.157 rmmod nvme_fabrics 00:44:58.157 rmmod nvme_keyring 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
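Teardown then unwinds the rig under the EXIT trap: the kernel initiator modules come out first (the rmmod lines above; nvme_fabrics and nvme_keyring unload as dependencies of nvme-tcp), killprocess verifies that pid 52580 still names an SPDK reactor (reactor_3) and not some recycled process before killing it, and the iptr helper strips only the firewall rules it tagged during init. Condensed, and with the assumed body of _remove_spdk_ns marked as such:

  modprobe -v -r nvme-tcp                      # also drops nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  [ "$(ps --no-headers -o comm= 52580)" != sudo ] && kill 52580   # refuse to kill a recycled sudo pid
  iptables-save | grep -v SPDK_NVMF | iptables-restore            # remove only SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk              # assumption: what _remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1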
00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 52580 ']' 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 52580 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 52580 ']' 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 52580 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 52580 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 52580' 00:44:58.157 killing process with pid 52580 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 52580 00:44:58.157 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 52580 00:44:59.099 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:59.099 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:59.099 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:59.099 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:44:59.099 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:44:59.099 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:59.099 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:44:59.099 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:59.099 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:59.099 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:59.099 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:59.099 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:01.644 13:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:01.644 00:45:01.644 real 0m14.443s 00:45:01.644 user 0m14.815s 00:45:01.644 
sys 0m7.370s 00:45:01.644 13:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:01.644 13:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:01.644 ************************************ 00:45:01.644 END TEST nvmf_bdevio 00:45:01.644 ************************************ 00:45:01.644 13:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:45:01.644 00:45:01.644 real 5m23.083s 00:45:01.644 user 10m53.595s 00:45:01.644 sys 2m13.173s 00:45:01.644 13:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:01.644 13:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:01.644 ************************************ 00:45:01.644 END TEST nvmf_target_core_interrupt_mode 00:45:01.644 ************************************ 00:45:01.644 13:50:09 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:45:01.644 13:50:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:45:01.644 13:50:09 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:45:01.645 13:50:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:01.645 ************************************ 00:45:01.645 START TEST nvmf_interrupt 00:45:01.645 ************************************ 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:45:01.645 * Looking for test storage... 
00:45:01.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:01.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:01.645 --rc genhtml_branch_coverage=1 00:45:01.645 --rc genhtml_function_coverage=1 00:45:01.645 --rc genhtml_legend=1 00:45:01.645 --rc geninfo_all_blocks=1 00:45:01.645 --rc geninfo_unexecuted_blocks=1 00:45:01.645 00:45:01.645 ' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:01.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:01.645 --rc genhtml_branch_coverage=1 00:45:01.645 --rc genhtml_function_coverage=1 00:45:01.645 --rc genhtml_legend=1 00:45:01.645 --rc geninfo_all_blocks=1 00:45:01.645 --rc geninfo_unexecuted_blocks=1 00:45:01.645 00:45:01.645 ' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:01.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:01.645 --rc genhtml_branch_coverage=1 00:45:01.645 --rc genhtml_function_coverage=1 00:45:01.645 --rc genhtml_legend=1 00:45:01.645 --rc geninfo_all_blocks=1 00:45:01.645 --rc geninfo_unexecuted_blocks=1 00:45:01.645 00:45:01.645 ' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:01.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:01.645 --rc genhtml_branch_coverage=1 00:45:01.645 --rc genhtml_function_coverage=1 00:45:01.645 --rc genhtml_legend=1 00:45:01.645 --rc geninfo_all_blocks=1 00:45:01.645 --rc geninfo_unexecuted_blocks=1 00:45:01.645 00:45:01.645 ' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:01.645 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:01.646 13:50:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:45:01.646 13:50:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:45:09.786 Found 0000:31:00.0 (0x8086 - 0x159b) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:09.786 13:50:17 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:45:09.786 Found 0000:31:00.1 (0x8086 - 0x159b) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:45:09.786 Found net devices under 0000:31:00.0: cvl_0_0 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:45:09.786 Found net devices under 0000:31:00.1: cvl_0_1 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:09.786 13:50:17 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:09.786 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:09.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:09.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:45:09.787 00:45:09.787 --- 10.0.0.2 ping statistics --- 00:45:09.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:09.787 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:09.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:09.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:45:09.787 00:45:09.787 --- 10.0.0.1 ping statistics --- 00:45:09.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:09.787 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=57928 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 57928 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 57928 ']' 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:09.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:09.787 13:50:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:09.787 [2024-11-07 13:50:17.536286] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:09.787 [2024-11-07 13:50:17.538597] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:45:09.787 [2024-11-07 13:50:17.538683] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:09.787 [2024-11-07 13:50:17.682990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:09.787 [2024-11-07 13:50:17.781667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
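
The namespace plumbing that nvmf_tcp_init traced above reduces to the sketch below. Interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the values from this run; the real helper also tags the iptables rule with an SPDK_NVMF comment so teardown can find it.

    # Target-side port lives in its own netns, so initiator and target
    # exchange real TCP over the wire instead of loopback.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"               # target port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                            # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1        # target ns -> root ns

Both pings answering is why the helper returned 0 in the trace.
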
00:45:09.787 [2024-11-07 13:50:17.781704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:09.787 [2024-11-07 13:50:17.781720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:09.787 [2024-11-07 13:50:17.781730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:09.787 [2024-11-07 13:50:17.781741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:09.787 [2024-11-07 13:50:17.783643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:09.787 [2024-11-07 13:50:17.783666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:10.047 [2024-11-07 13:50:18.021617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:10.047 [2024-11-07 13:50:18.021761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:45:10.047 [2024-11-07 13:50:18.021915] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:45:10.308 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:10.308 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:45:10.308 13:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:10.308 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:10.308 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:45:10.568 5000+0 records in 00:45:10.568 5000+0 records out 00:45:10.568 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0186176 s, 550 MB/s 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:10.568 AIO0 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:10.568 [2024-11-07 13:50:18.412374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:10.568 13:50:18 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:10.568 [2024-11-07 13:50:18.456699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 57928 0 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 57928 0 idle 00:45:10.568 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=57928 00:45:10.569 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:10.569 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:10.569 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:10.569 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:10.569 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:10.569 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:10.569 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:10.569 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:10.569 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:10.569 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 57928 -w 256 00:45:10.569 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 57928 root 20 0 20.1t 213120 99072 S 6.2 0.2 0:00.58 reactor_0' 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 57928 root 20 0 20.1t 213120 99072 S 6.2 0.2 0:00.58 reactor_0 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 
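
The dd + bdev_aio_create + nvmf_* RPCs above built the target configuration; the reactor_is_idle checks that follow boil down to sampling one frame of threaded top output and reading the %CPU column. A minimal sketch, with PID 57928 and the 30% idle threshold taken from this trace (the field position assumes top's default column layout):

    # Read the reactor thread's %CPU from a single batch-mode top sample.
    reactor_cpu() {                    # usage: reactor_cpu <pid> <reactor idx>
        top -bHn 1 -p "$1" -w 256 | grep "reactor_$2" \
            | sed -e 's/^\s*//g' | awk '{print $9}'
    }
    cpu=$(reactor_cpu 57928 0)         # e.g. "6.2"
    cpu=${cpu%.*}                      # integer part, as the trace computes
    if (( ${cpu:-0} > 30 )); then      # idle_threshold=30
        echo "reactor_0 is NOT idle (${cpu}% CPU)"
    else
        echo "reactor_0 is idle"
    fi

The busy variant flips the comparison: once load is applied, BUSY_THRESHOLD drops to 30 and the check demands %CPU at or above it.
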
00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 57928 1 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 57928 1 idle 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=57928 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 57928 -w 256 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 57939 root 20 0 20.1t 213120 99072 S 0.0 0.2 0:00.00 reactor_1' 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 57939 root 20 0 20.1t 213120 99072 S 0.0 0.2 0:00.00 reactor_1 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:10.830 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=58296 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:11.091 13:50:18 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 57928 0 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 57928 0 busy 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=57928 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 57928 -w 256 00:45:11.091 13:50:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 57928 root 20 0 20.1t 220032 99072 R 86.7 0.2 0:00.71 reactor_0' 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 57928 root 20 0 20.1t 220032 99072 R 86.7 0.2 0:00.71 reactor_0 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=86.7 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=86 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 57928 1 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 57928 1 busy 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=57928 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 
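
For reference, the workload just launched, rewritten as an annotated argument list. This is a sketch; $SPDK_BIN stands in for the Jenkins build path shown in the trace.

    # spdk_nvme_perf drives the target from cores 2-3 while reactors 0-1 serve it.
    perf_args=(
        -q 256              # queue depth per worker
        -o 4096             # 4 KiB I/Os
        -w randrw           # random mixed workload...
        -M 30               # ...with 30% reads
        -t 10               # 10-second run
        -c 0xC              # cores 2-3; the interrupt-mode target owns 0-1 (-m 0x3)
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    )
    "$SPDK_BIN/spdk_nvme_perf" "${perf_args[@]}" &
    perf_pid=$!             # stored as perf_pid=58296 above

Real load is exactly what the busy checks verify: in interrupt mode the reactors should only run hot while I/O is in flight (86.7% and 99.9% here) rather than polling at 100% around the clock.
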
00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 57928 -w 256 00:45:11.091 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:11.352 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 57939 root 20 0 20.1t 223488 99072 R 99.9 0.2 0:00.26 reactor_1' 00:45:11.352 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 57939 root 20 0 20.1t 223488 99072 R 99.9 0.2 0:00.26 reactor_1 00:45:11.352 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:11.352 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:11.352 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:45:11.352 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:45:11.352 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:45:11.352 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:45:11.352 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:45:11.352 13:50:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:11.352 13:50:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 58296 00:45:21.351 Initializing NVMe Controllers 00:45:21.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:21.351 Controller IO queue size 256, less than required. 00:45:21.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:45:21.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:45:21.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:45:21.351 Initialization complete. Launching workers. 
00:45:21.351 ======================================================== 00:45:21.351 Latency(us) 00:45:21.351 Device Information : IOPS MiB/s Average min max 00:45:21.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18798.70 73.43 13622.99 4221.53 53025.52 00:45:21.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 15571.10 60.82 16445.84 9638.84 19858.28 00:45:21.351 ======================================================== 00:45:21.351 Total : 34369.80 134.26 14901.87 4221.53 53025.52 00:45:21.351 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 57928 0 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 57928 0 idle 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=57928 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 57928 -w 256 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 57928 root 20 0 20.1t 225792 99072 S 0.0 0.2 0:20.58 reactor_0' 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 57928 root 20 0 20.1t 225792 99072 S 0.0 0.2 0:20.58 reactor_0 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 57928 1 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 57928 1 idle 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=57928 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 57928 -w 256 00:45:21.351 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:21.612 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 57939 root 20 0 20.1t 225792 99072 S 0.0 0.2 0:10.00 reactor_1' 00:45:21.612 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 57939 root 20 0 20.1t 225792 99072 S 0.0 0.2 0:10.00 reactor_1 00:45:21.612 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:21.612 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:21.612 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:21.612 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:21.612 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:21.612 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:21.612 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:21.612 13:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:21.612 13:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:45:22.553 13:50:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:45:22.554 13:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:45:22.554 13:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:45:22.554 13:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:45:22.554 13:50:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in 
{0..1} 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 57928 0 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 57928 0 idle 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=57928 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 57928 -w 256 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 57928 root 20 0 20.1t 298368 125568 S 0.0 0.2 0:21.09 reactor_0' 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 57928 root 20 0 20.1t 298368 125568 S 0.0 0.2 0:21.09 reactor_0 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 57928 1 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 57928 1 idle 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=57928 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 57928 -w 256 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 57939 root 20 0 20.1t 298368 125568 S 0.0 0.2 0:10.34 reactor_1' 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 57939 root 20 0 20.1t 298368 125568 S 0.0 0.2 0:10.34 reactor_1 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:24.622 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:24.884 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:24.884 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:24.884 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:24.884 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:24.884 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:24.884 13:50:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:24.884 13:50:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:45:25.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:45:25.145 13:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:45:25.145 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:45:25.145 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:45:25.145 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:25.145 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:45:25.145 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:25.406 rmmod nvme_tcp 00:45:25.406 rmmod nvme_fabrics 00:45:25.406 rmmod nvme_keyring 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 57928 ']' 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@518 -- # killprocess 57928 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 57928 ']' 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 57928 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57928 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57928' 00:45:25.406 killing process with pid 57928 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 57928 00:45:25.406 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 57928 00:45:26.349 13:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:26.349 13:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:26.349 13:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:26.349 13:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:45:26.349 13:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:45:26.349 13:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:45:26.349 13:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:26.349 13:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:26.349 13:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:26.349 13:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:26.349 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:26.349 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:28.263 13:50:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:28.263 00:45:28.263 real 0m27.045s 00:45:28.263 user 0m42.190s 00:45:28.263 sys 0m10.124s 00:45:28.263 13:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:28.263 13:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:28.263 ************************************ 00:45:28.263 END TEST nvmf_interrupt 00:45:28.263 ************************************ 00:45:28.263 00:45:28.263 real 39m37.992s 00:45:28.263 user 92m57.025s 00:45:28.263 sys 11m52.250s 00:45:28.263 13:50:36 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:28.263 13:50:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:28.263 ************************************ 00:45:28.263 END TEST nvmf_tcp 00:45:28.263 ************************************ 00:45:28.263 13:50:36 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:45:28.263 13:50:36 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:45:28.263 13:50:36 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:45:28.263 13:50:36 -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:45:28.263 13:50:36 -- common/autotest_common.sh@10 -- # set +x 00:45:28.524 ************************************ 00:45:28.524 START TEST spdkcli_nvmf_tcp 00:45:28.524 ************************************ 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:45:28.524 * Looking for test storage... 00:45:28.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:45:28.524 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:28.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:28.525 --rc genhtml_branch_coverage=1 00:45:28.525 --rc genhtml_function_coverage=1 00:45:28.525 --rc genhtml_legend=1 00:45:28.525 --rc geninfo_all_blocks=1 00:45:28.525 --rc geninfo_unexecuted_blocks=1 00:45:28.525 00:45:28.525 ' 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:28.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:28.525 --rc genhtml_branch_coverage=1 00:45:28.525 --rc genhtml_function_coverage=1 00:45:28.525 --rc genhtml_legend=1 00:45:28.525 --rc geninfo_all_blocks=1 00:45:28.525 --rc geninfo_unexecuted_blocks=1 00:45:28.525 00:45:28.525 ' 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:28.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:28.525 --rc genhtml_branch_coverage=1 00:45:28.525 --rc genhtml_function_coverage=1 00:45:28.525 --rc genhtml_legend=1 00:45:28.525 --rc geninfo_all_blocks=1 00:45:28.525 --rc geninfo_unexecuted_blocks=1 00:45:28.525 00:45:28.525 ' 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:28.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:28.525 --rc genhtml_branch_coverage=1 00:45:28.525 --rc genhtml_function_coverage=1 00:45:28.525 --rc genhtml_legend=1 00:45:28.525 --rc geninfo_all_blocks=1 00:45:28.525 --rc geninfo_unexecuted_blocks=1 00:45:28.525 00:45:28.525 ' 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:45:28.525 
13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:45:28.525 13:50:36 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:28.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=61691 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 61691 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 61691 ']' 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:28.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:28.525 13:50:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:28.787 [2024-11-07 13:50:36.600131] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
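
While the target prints these startup notices, waitforlisten polls the RPC socket until the daemon answers. Roughly the following, as a sketch rather than the exact autotest helper; $rootdir and the retry budget are assumptions:

    # Poll until the freshly started nvmf_tgt responds on its RPC socket.
    waitfor_rpc() {                    # usage: waitfor_rpc <pid> [rpc_addr]
        local pid=$1 rpc=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # process died early
            "$rootdir/scripts/rpc.py" -s "$rpc" rpc_get_methods \
                &>/dev/null && return 0              # RPC server is up
            sleep 0.1
        done
        return 1                                     # timed out
    }
    waitfor_rpc 61691                  # nvmf_tgt_pid from the trace

Only after this returns does the spdkcli job start issuing the create commands listed below.
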
00:45:28.787 [2024-11-07 13:50:36.600246] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61691 ] 00:45:28.787 [2024-11-07 13:50:36.738239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:29.047 [2024-11-07 13:50:36.835901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:29.047 [2024-11-07 13:50:36.835919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:29.619 13:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:29.619 13:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:45:29.619 13:50:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:45:29.619 13:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:29.619 13:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:29.619 13:50:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:45:29.619 13:50:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:45:29.619 13:50:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:45:29.619 13:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:29.619 13:50:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:29.619 13:50:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:45:29.619 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:45:29.619 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:45:29.619 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:45:29.619 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:45:29.619 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:45:29.619 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:45:29.619 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:29.619 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:29.619 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:45:29.619 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:45:29.619 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:45:29.619 ' 00:45:32.164 [2024-11-07 13:50:39.932333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:33.546 [2024-11-07 13:50:41.292845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:45:36.087 [2024-11-07 13:50:43.824507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:45:38.628 [2024-11-07 13:50:46.035170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:45:40.011 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:45:40.011 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:45:40.011 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:45:40.011 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:45:40.011 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:45:40.011 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:45:40.011 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:45:40.011 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:40.011 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:45:40.011 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:40.011 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:40.011 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:45:40.012 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:45:40.012 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:45:40.012 13:50:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:45:40.012 13:50:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:40.012 13:50:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:40.012 13:50:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:45:40.012 13:50:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:40.012 13:50:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:40.012 13:50:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:45:40.012 13:50:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:45:40.272 13:50:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:45:40.272 13:50:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:45:40.272 13:50:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:45:40.272 13:50:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:40.272 13:50:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:40.532 
13:50:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:45:40.532 13:50:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:40.532 13:50:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:40.532 13:50:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:45:40.532 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:45:40.532 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:45:40.532 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:45:40.532 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:45:40.532 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:45:40.532 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:45:40.532 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:45:40.532 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:45:40.532 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:45:40.532 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:45:40.532 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:45:40.532 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:45:40.532 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:45:40.532 ' 00:45:45.814 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:45:45.814 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:45:45.814 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:45.814 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:45:45.814 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:45:45.814 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:45:45.814 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:45:45.814 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:45.814 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:45:45.814 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:45:45.814 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:45:45.814 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:45:45.814 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:45:45.814 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:45:45.814 13:50:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:45:45.814 13:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:45.814 13:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:45.814 
13:50:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 61691 00:45:45.814 13:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 61691 ']' 00:45:45.814 13:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 61691 00:45:45.814 13:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:45:45.814 13:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:45.815 13:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61691 00:45:45.815 13:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:45:45.815 13:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:45:45.815 13:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61691' 00:45:45.815 killing process with pid 61691 00:45:45.815 13:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 61691 00:45:45.815 13:50:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 61691 00:45:46.755 13:50:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:45:46.755 13:50:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:45:46.755 13:50:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 61691 ']' 00:45:46.755 13:50:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 61691 00:45:46.755 13:50:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 61691 ']' 00:45:46.755 13:50:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 61691 00:45:46.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (61691) - No such process 00:45:46.755 13:50:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 61691 is not found' 00:45:46.755 Process with pid 61691 is not found 00:45:46.755 13:50:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:45:46.755 13:50:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:45:46.755 13:50:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:45:46.755 00:45:46.755 real 0m18.305s 00:45:46.755 user 0m38.736s 00:45:46.755 sys 0m0.945s 00:45:46.755 13:50:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:46.755 13:50:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:46.755 ************************************ 00:45:46.755 END TEST spdkcli_nvmf_tcp 00:45:46.755 ************************************ 00:45:46.755 13:50:54 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:46.755 13:50:54 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:45:46.755 13:50:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:45:46.755 13:50:54 -- common/autotest_common.sh@10 -- # set +x 00:45:46.755 ************************************ 00:45:46.755 START TEST nvmf_identify_passthru 00:45:46.755 ************************************ 00:45:46.755 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:46.755 * Looking for test storage... 
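The create/clear batches above are driven by test/spdkcli/spdkcli_job.py, which feeds each quoted command to an interactive spdkcli session and greps its output for the quoted match string. The same steps can be replayed one at a time with scripts/spdkcli.py, which the trace already invokes one-shot for the `ll /nvmf` listing. A minimal sketch, not part of the test, assuming an nvmf_tgt is up on the default RPC socket, that SPDK_ROOT points at the checkout, and that spdkcli.py joins its arguments into a single shell command as the `ll /nvmf` call suggests:

  cd "$SPDK_ROOT"
  ./scripts/spdkcli.py 'nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'
  # 32 MB malloc bdev with 512-byte blocks, exported over NVMe/TCP on 127.0.0.1:4260
  ./scripts/spdkcli.py '/bdevs/malloc create 32 512 Malloc1'
  ./scripts/spdkcli.py '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'
  ./scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1'
  ./scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'
  ./scripts/spdkcli.py ll /nvmf   # the listing check_match diffs against spdkcli_nvmf.test.match
  ./scripts/spdkcli.py '/nvmf/subsystem delete_all'   # teardown mirrors the clear_nvmf_config batch
  ./scripts/spdkcli.py '/bdevs/malloc delete Malloc1'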
00:45:46.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:46.755 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:46.755 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:45:46.755 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:47.016 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:47.016 13:50:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:45:47.016 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:47.016 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:47.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:47.017 --rc genhtml_branch_coverage=1 00:45:47.017 --rc genhtml_function_coverage=1 00:45:47.017 --rc genhtml_legend=1 00:45:47.017 --rc geninfo_all_blocks=1 00:45:47.017 --rc geninfo_unexecuted_blocks=1 00:45:47.017 00:45:47.017 ' 00:45:47.017 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:47.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:47.017 --rc genhtml_branch_coverage=1 00:45:47.017 --rc genhtml_function_coverage=1 00:45:47.017 --rc genhtml_legend=1 00:45:47.017 --rc geninfo_all_blocks=1 00:45:47.017 --rc geninfo_unexecuted_blocks=1 00:45:47.017 00:45:47.017 ' 00:45:47.017 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:47.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:47.017 --rc genhtml_branch_coverage=1 00:45:47.017 --rc genhtml_function_coverage=1 00:45:47.017 --rc genhtml_legend=1 00:45:47.017 --rc geninfo_all_blocks=1 00:45:47.017 --rc geninfo_unexecuted_blocks=1 00:45:47.017 00:45:47.017 ' 00:45:47.017 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:47.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:47.017 --rc genhtml_branch_coverage=1 00:45:47.017 --rc genhtml_function_coverage=1 00:45:47.017 --rc genhtml_legend=1 00:45:47.017 --rc geninfo_all_blocks=1 00:45:47.017 --rc geninfo_unexecuted_blocks=1 00:45:47.017 00:45:47.017 ' 00:45:47.017 13:50:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:47.017 13:50:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:45:47.017 13:50:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:47.017 13:50:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:47.017 13:50:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:47.017 13:50:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:47.017 13:50:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:47.017 13:50:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:47.017 13:50:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:45:47.017 13:50:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:47.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:47.017 13:50:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:47.017 13:50:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:45:47.017 13:50:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:47.017 13:50:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:47.017 13:50:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:47.017 13:50:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:47.017 13:50:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:47.017 13:50:54 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:47.017 13:50:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:45:47.017 13:50:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:47.017 13:50:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:47.017 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:47.017 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:47.017 13:50:54 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:45:47.017 13:50:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:45:55.162 13:51:02 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:55.162 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:45:55.163 Found 0000:31:00.0 (0x8086 - 0x159b) 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:45:55.163 Found 0000:31:00.1 (0x8086 - 0x159b) 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:45:55.163 Found net devices under 0000:31:00.0: cvl_0_0 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:45:55.163 Found net devices under 0000:31:00.1: cvl_0_1 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:55.163 13:51:02 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:55.163 13:51:02 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:55.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:55.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:45:55.163 00:45:55.163 --- 10.0.0.2 ping statistics --- 00:45:55.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:55.163 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:55.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:55.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:45:55.163 00:45:55.163 --- 10.0.0.1 ping statistics --- 00:45:55.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:55.163 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:55.163 13:51:03 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:55.424 13:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:55.424 13:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:45:55.424 13:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:45:55.424 13:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:45:55.424 13:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:45:55.424 13:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:45:55.424 13:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:45:55.424 13:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:45:55.993 13:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:45:55.993 13:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:45:55.993 13:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:45:55.993 13:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:45:56.562 13:51:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:45:56.562 13:51:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:45:56.562 13:51:04 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:56.562 13:51:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:56.822 13:51:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:45:56.822 13:51:04 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:56.822 13:51:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:56.822 13:51:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=69572 00:45:56.822 13:51:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:56.822 13:51:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:45:56.822 13:51:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 69572 00:45:56.822 13:51:04 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 69572 ']' 00:45:56.822 13:51:04 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:56.822 13:51:04 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:56.822 13:51:04 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:56.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:56.822 13:51:04 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:56.822 13:51:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:56.822 [2024-11-07 13:51:04.708832] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:45:56.822 [2024-11-07 13:51:04.708959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:57.084 [2024-11-07 13:51:04.865254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:57.084 [2024-11-07 13:51:04.968666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:57.084 [2024-11-07 13:51:04.968707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
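The app_setup_trace notices here (continuing below) come from launching the target with -e 0xFFFF, which enables every tracepoint group under shm id 0 (the -i 0 above). While the target runs, that region can be snapshotted with the tool the banner names; a minimal sketch, not executed by this test, assuming the binary sits at build/bin/spdk_trace in the same checkout:

  # live snapshot of the running nvmf target's tracepoints, per the banner's hint
  "$SPDK_ROOT/build/bin/spdk_trace" -s nvmf -i 0 > nvmf_trace.txt
  # post-mortem against the shm copy the banner recommends keeping
  # (assumes spdk_trace's -f file-input flag)
  "$SPDK_ROOT/build/bin/spdk_trace" -f /dev/shm/nvmf_trace.0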
00:45:57.084 [2024-11-07 13:51:04.968719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:57.084 [2024-11-07 13:51:04.968731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:57.084 [2024-11-07 13:51:04.968739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:57.084 [2024-11-07 13:51:04.970931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:57.084 [2024-11-07 13:51:04.970991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:57.084 [2024-11-07 13:51:04.971178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:57.084 [2024-11-07 13:51:04.971202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:57.655 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:57.655 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:45:57.655 13:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:45:57.655 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.655 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:57.655 INFO: Log level set to 20 00:45:57.655 INFO: Requests: 00:45:57.655 { 00:45:57.655 "jsonrpc": "2.0", 00:45:57.655 "method": "nvmf_set_config", 00:45:57.655 "id": 1, 00:45:57.655 "params": { 00:45:57.655 "admin_cmd_passthru": { 00:45:57.655 "identify_ctrlr": true 00:45:57.655 } 00:45:57.655 } 00:45:57.655 } 00:45:57.655 00:45:57.655 INFO: response: 00:45:57.655 { 00:45:57.655 "jsonrpc": "2.0", 00:45:57.655 "id": 1, 00:45:57.655 "result": true 00:45:57.655 } 00:45:57.655 00:45:57.655 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.655 13:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:45:57.655 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.655 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:57.655 INFO: Setting log level to 20 00:45:57.655 INFO: Setting log level to 20 00:45:57.655 INFO: Log level set to 20 00:45:57.655 INFO: Log level set to 20 00:45:57.655 INFO: Requests: 00:45:57.655 { 00:45:57.655 "jsonrpc": "2.0", 00:45:57.655 "method": "framework_start_init", 00:45:57.655 "id": 1 00:45:57.655 } 00:45:57.655 00:45:57.655 INFO: Requests: 00:45:57.655 { 00:45:57.655 "jsonrpc": "2.0", 00:45:57.655 "method": "framework_start_init", 00:45:57.655 "id": 1 00:45:57.655 } 00:45:57.655 00:45:57.916 [2024-11-07 13:51:05.725096] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:45:57.916 INFO: response: 00:45:57.916 { 00:45:57.916 "jsonrpc": "2.0", 00:45:57.916 "id": 1, 00:45:57.916 "result": true 00:45:57.916 } 00:45:57.916 00:45:57.916 INFO: response: 00:45:57.916 { 00:45:57.916 "jsonrpc": "2.0", 00:45:57.916 "id": 1, 00:45:57.916 "result": true 00:45:57.916 } 00:45:57.916 00:45:57.916 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.916 13:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:57.916 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.916 13:51:05 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:45:57.916 INFO: Setting log level to 40 00:45:57.916 INFO: Setting log level to 40 00:45:57.916 INFO: Setting log level to 40 00:45:57.916 [2024-11-07 13:51:05.740667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:57.916 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.916 13:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:45:57.916 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:57.916 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:57.916 13:51:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:45:57.916 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.916 13:51:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:58.176 Nvme0n1 00:45:58.176 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:58.176 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:45:58.176 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:58.176 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:58.176 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:58.176 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:45:58.176 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:58.176 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:58.176 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:58.176 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:58.176 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:58.176 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:58.176 [2024-11-07 13:51:06.178289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:58.437 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:58.437 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:45:58.437 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:58.437 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:58.437 [ 00:45:58.437 { 00:45:58.437 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:45:58.437 "subtype": "Discovery", 00:45:58.437 "listen_addresses": [], 00:45:58.437 "allow_any_host": true, 00:45:58.437 "hosts": [] 00:45:58.437 }, 00:45:58.437 { 00:45:58.437 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:58.437 "subtype": "NVMe", 00:45:58.437 "listen_addresses": [ 00:45:58.437 { 00:45:58.437 "trtype": "TCP", 00:45:58.437 "adrfam": "IPv4", 00:45:58.437 "traddr": "10.0.0.2", 00:45:58.437 "trsvcid": "4420" 00:45:58.437 } 00:45:58.437 ], 00:45:58.437 "allow_any_host": true, 00:45:58.437 "hosts": [], 00:45:58.437 "serial_number": 
"SPDK00000000000001", 00:45:58.437 "model_number": "SPDK bdev Controller", 00:45:58.437 "max_namespaces": 1, 00:45:58.437 "min_cntlid": 1, 00:45:58.437 "max_cntlid": 65519, 00:45:58.437 "namespaces": [ 00:45:58.437 { 00:45:58.437 "nsid": 1, 00:45:58.437 "bdev_name": "Nvme0n1", 00:45:58.437 "name": "Nvme0n1", 00:45:58.437 "nguid": "3634473052605494002538450000002D", 00:45:58.437 "uuid": "36344730-5260-5494-0025-38450000002d" 00:45:58.437 } 00:45:58.437 ] 00:45:58.437 } 00:45:58.437 ] 00:45:58.437 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:58.437 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:58.437 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:45:58.437 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:45:58.698 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:45:58.698 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:58.698 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:45:58.698 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:45:58.960 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:45:58.960 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:45:58.960 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:45:58.960 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:58.960 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:58.960 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:58.960 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:58.960 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:45:58.960 13:51:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:45:58.960 13:51:06 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:58.960 13:51:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:45:58.960 13:51:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:58.960 13:51:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:45:58.960 13:51:06 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:58.960 13:51:06 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:58.960 rmmod nvme_tcp 00:45:58.960 rmmod nvme_fabrics 00:45:58.960 rmmod nvme_keyring 00:45:58.960 13:51:06 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:58.960 13:51:06 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:45:58.960 13:51:06 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:45:58.960 13:51:06 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
69572 ']' 00:45:58.960 13:51:06 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 69572 00:45:58.960 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 69572 ']' 00:45:58.960 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 69572 00:45:58.960 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:45:58.960 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:58.960 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69572 00:45:59.221 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:45:59.221 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:45:59.221 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69572' 00:45:59.221 killing process with pid 69572 00:45:59.221 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 69572 00:45:59.221 13:51:06 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 69572 00:46:00.163 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:00.163 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:00.163 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:00.163 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:46:00.163 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:46:00.163 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:00.163 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:46:00.163 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:00.163 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:00.163 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:00.163 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:00.163 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:02.076 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:02.076 00:46:02.076 real 0m15.359s 00:46:02.076 user 0m13.595s 00:46:02.076 sys 0m7.601s 00:46:02.076 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:46:02.076 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:02.076 ************************************ 00:46:02.076 END TEST nvmf_identify_passthru 00:46:02.076 ************************************ 00:46:02.076 13:51:10 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:46:02.076 13:51:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:46:02.076 13:51:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:02.076 13:51:10 -- common/autotest_common.sh@10 -- # set +x 00:46:02.076 ************************************ 00:46:02.076 START TEST nvmf_dif 00:46:02.076 ************************************ 00:46:02.076 13:51:10 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:46:02.337 * Looking for test storage... 
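nvmftestfini above unwinds the phy-mode setup from nvmftestinit in reverse: the nvme-tcp and nvme-fabrics modules are removed, the SPDK_NVMF-tagged iptables ACCEPT rule is dropped by filtering it out of iptables-save before restoring, the target's network namespace is torn down, and the initiator address is flushed. A minimal sketch of the same teardown by hand, assuming the namespace and interface names from this run (cvl_0_0_ns_spdk, cvl_0_0/cvl_0_1):

  modprobe -r nvme-tcp && modprobe -r nvme-fabrics      # also drops nvme_keyring, per the rmmod lines above
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # the iptr helper seen in the trace
  ip netns del cvl_0_0_ns_spdk                          # returns cvl_0_0 to the root namespace
  ip -4 addr flush cvl_0_1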
00:46:02.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:02.337 13:51:10 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:46:02.337 13:51:10 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:46:02.337 13:51:10 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:46:02.337 13:51:10 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:02.337 13:51:10 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:46:02.337 13:51:10 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:02.337 13:51:10 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:46:02.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:02.337 --rc genhtml_branch_coverage=1 00:46:02.337 --rc genhtml_function_coverage=1 00:46:02.337 --rc genhtml_legend=1 00:46:02.337 --rc geninfo_all_blocks=1 00:46:02.337 --rc geninfo_unexecuted_blocks=1 00:46:02.337 00:46:02.337 ' 00:46:02.337 13:51:10 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:46:02.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:02.337 --rc genhtml_branch_coverage=1 00:46:02.337 --rc genhtml_function_coverage=1 00:46:02.337 --rc genhtml_legend=1 00:46:02.337 --rc geninfo_all_blocks=1 00:46:02.337 --rc geninfo_unexecuted_blocks=1 00:46:02.337 00:46:02.337 ' 00:46:02.337 13:51:10 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:46:02.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:02.337 --rc genhtml_branch_coverage=1 00:46:02.337 --rc genhtml_function_coverage=1 00:46:02.337 --rc genhtml_legend=1 00:46:02.337 --rc geninfo_all_blocks=1 00:46:02.337 --rc geninfo_unexecuted_blocks=1 00:46:02.337 00:46:02.337 ' 00:46:02.337 13:51:10 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:46:02.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:02.338 --rc genhtml_branch_coverage=1 00:46:02.338 --rc genhtml_function_coverage=1 00:46:02.338 --rc genhtml_legend=1 00:46:02.338 --rc geninfo_all_blocks=1 00:46:02.338 --rc geninfo_unexecuted_blocks=1 00:46:02.338 00:46:02.338 ' 00:46:02.338 13:51:10 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:02.338 13:51:10 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:46:02.338 13:51:10 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:02.338 13:51:10 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:02.338 13:51:10 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:02.338 13:51:10 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:02.338 13:51:10 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:02.338 13:51:10 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:02.338 13:51:10 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:46:02.338 13:51:10 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:02.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:02.338 13:51:10 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:46:02.338 13:51:10 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:46:02.338 13:51:10 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:46:02.338 13:51:10 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:46:02.338 13:51:10 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:02.338 13:51:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:02.338 13:51:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:46:02.338 13:51:10 nvmf_dif -- nvmf/common.sh@309 -- # 
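[note] The "[: : integer expression expected" message above is bash's [ builtin complaining that -eq got a non-integer operand: the trace shows the failing test was '[' '' -eq 1 ']', i.e. the variable tested at common.sh line 33 expanded to an empty string. The test only returns a nonzero status, so the run continues. A guarded form avoids the noise (a sketch; "flag" stands in for whatever variable that line actually tests, which this log does not show):

# Default the operand before an arithmetic test so [ never sees an empty string
if [ "${flag:-0}" -eq 1 ]; then
    : # take the branch; the real action at line 33 is not visible in this log
fi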
xtrace_disable 00:46:02.338 13:51:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:46:10.482 Found 0000:31:00.0 (0x8086 - 0x159b) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:10.482 
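[note] The device scan above reduces to a whitelist-plus-sysfs walk: supported Intel E810 (and X722/Mellanox) PCI IDs are collected from a bus cache, then each matching PCI function is mapped to its kernel net device through /sys. A condensed sketch of what nvmf/common.sh is doing here (pci_bus_cache is the helper array the trace expands; the link up-state check is omitted):

intel=0x8086
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
for pci in "${e810[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    net_devs+=("${pci_net_devs[@]##*/}")               # keep only the interface name
done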
13:51:17 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:46:10.482 Found 0000:31:00.1 (0x8086 - 0x159b) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:46:10.482 Found net devices under 0000:31:00.0: cvl_0_0 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:46:10.482 Found net devices under 0000:31:00.1: cvl_0_1 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:10.482 13:51:17 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:10.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:10.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:46:10.482 00:46:10.482 --- 10.0.0.2 ping statistics --- 00:46:10.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:10.482 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:10.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
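[note] The namespace plumbing above builds the two-sided test topology: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the host namespace as the initiator at 10.0.0.1. Condensed from the trace, the same setup as one sequence:

ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port out of the host
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address (host namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The two pings are the smoke test in both directions: host to namespace (10.0.0.2) and namespace back to host (10.0.0.1).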
00:46:10.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:46:10.482 00:46:10.482 --- 10.0.0.1 ping statistics --- 00:46:10.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:10.482 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:46:10.482 13:51:18 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:14.688 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:46:14.688 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:46:14.688 13:51:22 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:14.688 13:51:22 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:14.688 13:51:22 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:14.688 13:51:22 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:14.688 13:51:22 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:14.688 13:51:22 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:14.688 13:51:22 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:46:14.688 13:51:22 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:46:14.688 13:51:22 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:14.688 13:51:22 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:14.688 13:51:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:14.688 13:51:22 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=76979 00:46:14.688 13:51:22 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 76979 00:46:14.688 13:51:22 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:46:14.688 13:51:22 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 76979 ']' 00:46:14.688 13:51:22 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:14.688 13:51:22 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:46:14.688 13:51:22 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:46:14.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:14.688 13:51:22 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:46:14.688 13:51:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:14.688 [2024-11-07 13:51:22.374782] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:46:14.688 [2024-11-07 13:51:22.374901] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:14.688 [2024-11-07 13:51:22.518425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:14.688 [2024-11-07 13:51:22.613883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:14.688 [2024-11-07 13:51:22.613926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:14.688 [2024-11-07 13:51:22.613937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:14.688 [2024-11-07 13:51:22.613949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:14.688 [2024-11-07 13:51:22.613960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:14.688 [2024-11-07 13:51:22.615169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:15.258 13:51:23 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:46:15.258 13:51:23 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:46:15.258 13:51:23 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:15.258 13:51:23 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:15.258 13:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:15.258 13:51:23 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:15.258 13:51:23 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:46:15.258 13:51:23 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:46:15.258 13:51:23 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:15.258 13:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:15.258 [2024-11-07 13:51:23.168706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:15.258 13:51:23 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:15.258 13:51:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:46:15.258 13:51:23 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:46:15.258 13:51:23 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:15.258 13:51:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:15.258 ************************************ 00:46:15.258 START TEST fio_dif_1_default 00:46:15.258 ************************************ 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- 
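[note] The transport above is created with DIF insert/strip enabled (rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip): the target generates protection information on writes and strips/verifies it on reads, so the fio jobs below work with plain 512-byte blocks while the backing null bdevs carry 16 bytes of per-block metadata. Issued standalone, the same call would look like this (a sketch; the default /var/tmp/spdk.sock RPC socket is assumed):

scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip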
target/dif.sh@31 -- # create_subsystem 0 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:15.258 bdev_null0 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:15.258 [2024-11-07 13:51:23.225056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:15.258 13:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:15.259 { 00:46:15.259 "params": { 00:46:15.259 "name": "Nvme$subsystem", 00:46:15.259 "trtype": "$TEST_TRANSPORT", 00:46:15.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:15.259 "adrfam": "ipv4", 00:46:15.259 "trsvcid": "$NVMF_PORT", 00:46:15.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:15.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:15.259 "hdgst": ${hdgst:-false}, 00:46:15.259 
"ddgst": ${ddgst:-false} 00:46:15.259 }, 00:46:15.259 "method": "bdev_nvme_attach_controller" 00:46:15.259 } 00:46:15.259 EOF 00:46:15.259 )") 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:46:15.259 13:51:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:15.259 "params": { 00:46:15.259 "name": "Nvme0", 00:46:15.259 "trtype": "tcp", 00:46:15.259 "traddr": "10.0.0.2", 00:46:15.259 "adrfam": "ipv4", 00:46:15.259 "trsvcid": "4420", 00:46:15.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:15.259 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:15.259 "hdgst": false, 00:46:15.259 "ddgst": false 00:46:15.259 }, 00:46:15.259 "method": "bdev_nvme_attach_controller" 00:46:15.259 }' 00:46:15.523 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:15.523 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:15.523 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # break 00:46:15.523 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:15.523 13:51:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:15.783 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:15.783 fio-3.35 00:46:15.783 Starting 1 thread 00:46:28.010 00:46:28.010 filename0: (groupid=0, jobs=1): err= 0: pid=77494: Thu Nov 7 13:51:34 2024 00:46:28.010 read: IOPS=185, BW=743KiB/s (761kB/s)(7456KiB/10039msec) 00:46:28.010 slat (nsec): min=2890, max=20255, avg=6802.26, stdev=1311.79 00:46:28.010 clat (usec): min=685, max=45768, avg=21522.63, stdev=20465.08 00:46:28.010 lat (usec): min=691, max=45788, avg=21529.43, stdev=20464.86 00:46:28.010 clat percentiles (usec): 00:46:28.010 | 1.00th=[ 791], 5.00th=[ 930], 10.00th=[ 955], 20.00th=[ 979], 00:46:28.010 | 30.00th=[ 996], 40.00th=[ 1004], 50.00th=[41157], 60.00th=[41681], 00:46:28.010 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:46:28.010 | 99.00th=[42206], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:46:28.010 | 99.99th=[45876] 00:46:28.010 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=744.00, stdev=34.24, samples=20 00:46:28.010 iops : min= 168, max= 192, avg=186.00, stdev= 8.56, samples=20 00:46:28.010 lat (usec) : 750=0.54%, 1000=35.19% 00:46:28.010 lat (msec) : 2=14.06%, 50=50.21% 00:46:28.010 cpu : usr=94.03%, sys=5.72%, ctx=14, majf=0, minf=1634 00:46:28.010 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:28.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.010 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.010 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:28.010 00:46:28.010 Run status group 0 (all jobs): 00:46:28.010 READ: bw=743KiB/s (761kB/s), 743KiB/s-743KiB/s (761kB/s-761kB/s), io=7456KiB (7635kB), run=10039-10039msec 00:46:28.010 ----------------------------------------------------- 00:46:28.010 Suppressions used: 00:46:28.010 count bytes template 00:46:28.010 1 8 /usr/src/fio/parse.c 00:46:28.010 1 8 libtcmalloc_minimal.so 00:46:28.010 1 904 libcrypto.so 00:46:28.010 ----------------------------------------------------- 00:46:28.010 00:46:28.010 13:51:35 
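[note] The run above shows how the fio side is wired: the SPDK bdev plugin is preloaded alongside libasan (sanitizer first, so its symbols resolve ahead of the plugin), the bdev configuration arrives as JSON on /dev/fd/62, and the generated job file on /dev/fd/61. Reassembled as a direct invocation (a sketch; config.json would hold the bdev_nvme_attach_controller stanza printed above and job.fio the generated job file):

LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=config.json job.fio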
nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:28.010 00:46:28.010 real 0m12.426s 00:46:28.010 user 0m23.545s 00:46:28.010 sys 0m1.189s 00:46:28.010 13:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:28.011 ************************************ 00:46:28.011 END TEST fio_dif_1_default 00:46:28.011 ************************************ 00:46:28.011 13:51:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:46:28.011 13:51:35 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:46:28.011 13:51:35 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:28.011 13:51:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:28.011 ************************************ 00:46:28.011 START TEST fio_dif_1_multi_subsystems 00:46:28.011 ************************************ 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:28.011 bdev_null0 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
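[note] The fio_dif_1_default teardown above mirrors its setup: the subsystem is deleted before its backing null bdev, both via rpc_cmd (a thin wrapper over scripts/rpc.py against the app's RPC socket). The equivalent standalone calls:

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0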
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:28.011 [2024-11-07 13:51:35.699453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:28.011 bdev_null1 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:28.011 { 00:46:28.011 "params": { 00:46:28.011 "name": "Nvme$subsystem", 00:46:28.011 "trtype": "$TEST_TRANSPORT", 00:46:28.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:28.011 "adrfam": "ipv4", 00:46:28.011 "trsvcid": "$NVMF_PORT", 00:46:28.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:28.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:28.011 "hdgst": ${hdgst:-false}, 00:46:28.011 "ddgst": ${ddgst:-false} 00:46:28.011 }, 00:46:28.011 "method": "bdev_nvme_attach_controller" 00:46:28.011 } 00:46:28.011 EOF 00:46:28.011 )") 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep 
libasan 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:28.011 { 00:46:28.011 "params": { 00:46:28.011 "name": "Nvme$subsystem", 00:46:28.011 "trtype": "$TEST_TRANSPORT", 00:46:28.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:28.011 "adrfam": "ipv4", 00:46:28.011 "trsvcid": "$NVMF_PORT", 00:46:28.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:28.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:28.011 "hdgst": ${hdgst:-false}, 00:46:28.011 "ddgst": ${ddgst:-false} 00:46:28.011 }, 00:46:28.011 "method": "bdev_nvme_attach_controller" 00:46:28.011 } 00:46:28.011 EOF 00:46:28.011 )") 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:28.011 "params": { 00:46:28.011 "name": "Nvme0", 00:46:28.011 "trtype": "tcp", 00:46:28.011 "traddr": "10.0.0.2", 00:46:28.011 "adrfam": "ipv4", 00:46:28.011 "trsvcid": "4420", 00:46:28.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:28.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:28.011 "hdgst": false, 00:46:28.011 "ddgst": false 00:46:28.011 }, 00:46:28.011 "method": "bdev_nvme_attach_controller" 00:46:28.011 },{ 00:46:28.011 "params": { 00:46:28.011 "name": "Nvme1", 00:46:28.011 "trtype": "tcp", 00:46:28.011 "traddr": "10.0.0.2", 00:46:28.011 "adrfam": "ipv4", 00:46:28.011 "trsvcid": "4420", 00:46:28.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:28.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:28.011 "hdgst": false, 00:46:28.011 "ddgst": false 00:46:28.011 }, 00:46:28.011 "method": "bdev_nvme_attach_controller" 00:46:28.011 }' 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:28.011 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:28.012 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # break 00:46:28.012 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:28.012 13:51:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:28.273 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:28.273 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:28.273 fio-3.35 00:46:28.273 
Starting 2 threads 00:46:40.624 00:46:40.624 filename0: (groupid=0, jobs=1): err= 0: pid=79980: Thu Nov 7 13:51:47 2024 00:46:40.624 read: IOPS=185, BW=742KiB/s (759kB/s)(7424KiB/10011msec) 00:46:40.624 slat (nsec): min=5905, max=50264, avg=7751.74, stdev=2763.03 00:46:40.624 clat (usec): min=834, max=43606, avg=21552.50, stdev=20462.00 00:46:40.624 lat (usec): min=840, max=43656, avg=21560.25, stdev=20461.73 00:46:40.624 clat percentiles (usec): 00:46:40.624 | 1.00th=[ 873], 5.00th=[ 914], 10.00th=[ 938], 20.00th=[ 971], 00:46:40.624 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[41157], 60.00th=[41681], 00:46:40.624 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:46:40.624 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:46:40.624 | 99.99th=[43779] 00:46:40.624 bw ( KiB/s): min= 672, max= 768, per=49.79%, avg=740.80, stdev=34.86, samples=20 00:46:40.624 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:46:40.624 lat (usec) : 1000=31.47% 00:46:40.624 lat (msec) : 2=18.32%, 50=50.22% 00:46:40.624 cpu : usr=95.53%, sys=4.21%, ctx=13, majf=0, minf=1633 00:46:40.624 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:40.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:40.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:40.624 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:40.624 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:40.624 filename1: (groupid=0, jobs=1): err= 0: pid=79981: Thu Nov 7 13:51:47 2024 00:46:40.624 read: IOPS=186, BW=745KiB/s (763kB/s)(7456KiB/10006msec) 00:46:40.624 slat (nsec): min=5890, max=49876, avg=7647.62, stdev=2688.50 00:46:40.624 clat (usec): min=609, max=43194, avg=21448.40, stdev=20392.68 00:46:40.624 lat (usec): min=615, max=43244, avg=21456.05, stdev=20392.43 00:46:40.624 clat percentiles (usec): 00:46:40.624 | 1.00th=[ 742], 5.00th=[ 824], 10.00th=[ 906], 20.00th=[ 947], 00:46:40.624 | 30.00th=[ 988], 40.00th=[ 1020], 50.00th=[41157], 60.00th=[41157], 00:46:40.624 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:46:40.624 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:46:40.624 | 99.99th=[43254] 00:46:40.624 bw ( KiB/s): min= 672, max= 768, per=50.06%, avg=744.00, stdev=32.63, samples=20 00:46:40.624 iops : min= 168, max= 192, avg=186.00, stdev= 8.16, samples=20 00:46:40.624 lat (usec) : 750=1.34%, 1000=32.35% 00:46:40.624 lat (msec) : 2=16.09%, 50=50.21% 00:46:40.624 cpu : usr=95.19%, sys=4.54%, ctx=16, majf=0, minf=1634 00:46:40.624 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:40.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:40.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:40.624 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:40.624 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:40.624 00:46:40.624 Run status group 0 (all jobs): 00:46:40.624 READ: bw=1486KiB/s (1522kB/s), 742KiB/s-745KiB/s (759kB/s-763kB/s), io=14.5MiB (15.2MB), run=10006-10011msec 00:46:40.624 ----------------------------------------------------- 00:46:40.624 Suppressions used: 00:46:40.624 count bytes template 00:46:40.624 2 16 /usr/src/fio/parse.c 00:46:40.624 1 8 libtcmalloc_minimal.so 00:46:40.624 1 904 libcrypto.so 00:46:40.624 ----------------------------------------------------- 00:46:40.624 
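[note] With two subsystems exported, each fio thread attaches its own controller (Nvme0 to cnode0, Nvme1 to cnode1), and the aggregate in the run status is simply the sum of the two per-thread streams (742KiB/s and 745KiB/s). One way to eyeball the exported subsystems by hand (a sketch; assumes nvme-cli on the initiator side and the default RPC socket):

scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'     # target's view
nvme discover -t tcp -a 10.0.0.2 -s 4420                 # initiator's view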
00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:40.624 00:46:40.624 real 0m12.753s 00:46:40.624 user 0m33.753s 00:46:40.624 sys 0m1.464s 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:46:40.624 13:51:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:40.624 ************************************ 00:46:40.624 END TEST fio_dif_1_multi_subsystems 00:46:40.624 ************************************ 00:46:40.624 13:51:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:46:40.624 13:51:48 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:46:40.624 13:51:48 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:40.624 13:51:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:40.624 ************************************ 00:46:40.624 START TEST fio_dif_rand_params 00:46:40.624 
************************************ 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:40.624 bdev_null0 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:40.624 [2024-11-07 13:51:48.506472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:46:40.624 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:40.624 
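[note] fio_dif_rand_params switches the null bdev to DIF type 3 and a heavier job shape (bs=128k, numjobs=3, iodepth=3, runtime=5, per the trace above). The bdev as created here, written as a standalone RPC (positional arguments: name, size in MB, logical block size):

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# 64MB null bdev with 512B blocks, plus 16 bytes of per-block metadata
# carrying the DIF type 3 protection information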
13:51:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:40.625 { 00:46:40.625 "params": { 00:46:40.625 "name": "Nvme$subsystem", 00:46:40.625 "trtype": "$TEST_TRANSPORT", 00:46:40.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:40.625 "adrfam": "ipv4", 00:46:40.625 "trsvcid": "$NVMF_PORT", 00:46:40.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:40.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:40.625 "hdgst": ${hdgst:-false}, 00:46:40.625 "ddgst": ${ddgst:-false} 00:46:40.625 }, 00:46:40.625 "method": "bdev_nvme_attach_controller" 00:46:40.625 } 00:46:40.625 EOF 00:46:40.625 )") 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
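Note: the config=() heredoc and the jq/IFS/printf calls traced above are the whole of the harness's gen_nvmf_target_json helper. A condensed, runnable sketch of what it does (the heredoc is replaced by printf here for readability, and the top-level wrapper object is an assumption based on the shape --spdk_json_conf expects; the wrapper is not printed verbatim in this excerpt):

  gen_nvmf_target_json() {
      local subsystem config=()
      for subsystem in "${@:-1}"; do
          # One bdev_nvme_attach_controller block per subsystem id, with the
          # same fields as the traced heredoc template.
          config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":%s,"ddgst":%s},"method":"bdev_nvme_attach_controller"}' \
              "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
              "$subsystem" "$subsystem" "${hdgst:-false}" "${ddgst:-false}")")
      done
      # Comma-join the blocks (IFS drives the "${config[*]}" join) and let jq
      # validate and pretty-print the document that fio reads from /dev/fd/62.
      local IFS=,
      jq . <<< "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
  }

The printf'd result of exactly this assembly, for subsystem 0, appears in the trace immediately below.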
00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:40.625 "params": { 00:46:40.625 "name": "Nvme0", 00:46:40.625 "trtype": "tcp", 00:46:40.625 "traddr": "10.0.0.2", 00:46:40.625 "adrfam": "ipv4", 00:46:40.625 "trsvcid": "4420", 00:46:40.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:40.625 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:40.625 "hdgst": false, 00:46:40.625 "ddgst": false 00:46:40.625 }, 00:46:40.625 "method": "bdev_nvme_attach_controller" 00:46:40.625 }' 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:40.625 13:51:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:41.224 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:46:41.224 ... 00:46:41.224 fio-3.35 00:46:41.224 Starting 3 threads 00:46:47.804 00:46:47.804 filename0: (groupid=0, jobs=1): err= 0: pid=82464: Thu Nov 7 13:51:54 2024 00:46:47.804 read: IOPS=214, BW=26.9MiB/s (28.2MB/s)(135MiB/5007msec) 00:46:47.804 slat (nsec): min=6108, max=39939, avg=10827.84, stdev=1977.58 00:46:47.804 clat (usec): min=6465, max=54770, avg=13943.53, stdev=6730.74 00:46:47.804 lat (usec): min=6474, max=54781, avg=13954.36, stdev=6730.69 00:46:47.804 clat percentiles (usec): 00:46:47.804 | 1.00th=[ 7504], 5.00th=[ 8848], 10.00th=[10159], 20.00th=[11338], 00:46:47.804 | 30.00th=[11994], 40.00th=[12518], 50.00th=[12911], 60.00th=[13566], 00:46:47.804 | 70.00th=[14091], 80.00th=[14877], 90.00th=[15795], 95.00th=[16712], 00:46:47.804 | 99.00th=[52167], 99.50th=[53216], 99.90th=[54789], 99.95th=[54789], 00:46:47.804 | 99.99th=[54789] 00:46:47.804 bw ( KiB/s): min=23296, max=30720, per=33.32%, avg=27468.80, stdev=2522.90, samples=10 00:46:47.804 iops : min= 182, max= 240, avg=214.60, stdev=19.71, samples=10 00:46:47.804 lat (msec) : 10=9.01%, 20=88.20%, 50=0.56%, 100=2.23% 00:46:47.804 cpu : usr=94.77%, sys=4.93%, ctx=6, majf=0, minf=1633 00:46:47.804 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:47.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.804 issued rwts: total=1076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.804 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:47.804 filename0: (groupid=0, jobs=1): err= 0: pid=82465: Thu Nov 7 13:51:54 2024 00:46:47.804 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(132MiB/5048msec) 00:46:47.804 slat (nsec): min=8225, max=59305, avg=10665.29, stdev=2207.99 00:46:47.804 clat (usec): min=6061, max=94949, avg=14339.22, stdev=5363.20 00:46:47.804 lat (usec): min=6070, max=94962, avg=14349.88, stdev=5363.29 00:46:47.805 clat percentiles (usec): 00:46:47.805 | 1.00th=[ 7898], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11863], 00:46:47.805 | 
30.00th=[12780], 40.00th=[13566], 50.00th=[14091], 60.00th=[14746], 00:46:47.805 | 70.00th=[15270], 80.00th=[15795], 90.00th=[16450], 95.00th=[17433], 00:46:47.805 | 99.00th=[49021], 99.50th=[55313], 99.90th=[57410], 99.95th=[94897], 00:46:47.805 | 99.99th=[94897] 00:46:47.805 bw ( KiB/s): min=22528, max=28672, per=32.58%, avg=26854.40, stdev=1746.52, samples=10 00:46:47.805 iops : min= 176, max= 224, avg=209.80, stdev=13.64, samples=10 00:46:47.805 lat (msec) : 10=5.61%, 20=93.16%, 50=0.38%, 100=0.86% 00:46:47.805 cpu : usr=93.88%, sys=5.81%, ctx=6, majf=0, minf=1632 00:46:47.805 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:47.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.805 issued rwts: total=1052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.805 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:47.805 filename0: (groupid=0, jobs=1): err= 0: pid=82466: Thu Nov 7 13:51:54 2024 00:46:47.805 read: IOPS=224, BW=28.1MiB/s (29.4MB/s)(140MiB/5004msec) 00:46:47.805 slat (nsec): min=6029, max=43645, avg=10644.17, stdev=2015.46 00:46:47.805 clat (usec): min=4876, max=53789, avg=13350.07, stdev=5810.90 00:46:47.805 lat (usec): min=4885, max=53802, avg=13360.71, stdev=5811.01 00:46:47.805 clat percentiles (usec): 00:46:47.805 | 1.00th=[ 7308], 5.00th=[ 8356], 10.00th=[ 9634], 20.00th=[10552], 00:46:47.805 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12518], 60.00th=[13435], 00:46:47.805 | 70.00th=[14353], 80.00th=[15139], 90.00th=[16188], 95.00th=[16909], 00:46:47.805 | 99.00th=[51643], 99.50th=[52167], 99.90th=[53216], 99.95th=[53740], 00:46:47.805 | 99.99th=[53740] 00:46:47.805 bw ( KiB/s): min=26112, max=33792, per=34.95%, avg=28814.22, stdev=2282.97, samples=9 00:46:47.805 iops : min= 204, max= 264, avg=225.11, stdev=17.84, samples=9 00:46:47.805 lat (msec) : 10=13.45%, 20=84.68%, 50=0.45%, 100=1.42% 00:46:47.805 cpu : usr=94.32%, sys=5.38%, ctx=8, majf=0, minf=1637 00:46:47.805 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:47.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.805 issued rwts: total=1123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.805 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:47.805 00:46:47.805 Run status group 0 (all jobs): 00:46:47.805 READ: bw=80.5MiB/s (84.4MB/s), 26.0MiB/s-28.1MiB/s (27.3MB/s-29.4MB/s), io=406MiB (426MB), run=5004-5048msec 00:46:48.066 ----------------------------------------------------- 00:46:48.066 Suppressions used: 00:46:48.066 count bytes template 00:46:48.066 5 44 /usr/src/fio/parse.c 00:46:48.066 1 8 libtcmalloc_minimal.so 00:46:48.066 1 904 libcrypto.so 00:46:48.066 ----------------------------------------------------- 00:46:48.066 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 bdev_null0 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 [2024-11-07 13:51:55.934434] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 bdev_null1 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 bdev_null2 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.066 13:51:56 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:46:48.066 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.066 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.066 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:48.067 { 00:46:48.067 "params": { 00:46:48.067 "name": "Nvme$subsystem", 00:46:48.067 "trtype": "$TEST_TRANSPORT", 00:46:48.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:48.067 "adrfam": "ipv4", 00:46:48.067 "trsvcid": "$NVMF_PORT", 00:46:48.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:48.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:48.067 "hdgst": ${hdgst:-false}, 00:46:48.067 "ddgst": ${ddgst:-false} 00:46:48.067 }, 00:46:48.067 "method": "bdev_nvme_attach_controller" 00:46:48.067 } 00:46:48.067 EOF 00:46:48.067 )") 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:48.067 { 00:46:48.067 "params": { 00:46:48.067 "name": "Nvme$subsystem", 00:46:48.067 "trtype": "$TEST_TRANSPORT", 00:46:48.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:48.067 "adrfam": "ipv4", 00:46:48.067 "trsvcid": "$NVMF_PORT", 00:46:48.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:48.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:48.067 "hdgst": ${hdgst:-false}, 00:46:48.067 "ddgst": ${ddgst:-false} 00:46:48.067 }, 00:46:48.067 "method": "bdev_nvme_attach_controller" 00:46:48.067 } 00:46:48.067 EOF 00:46:48.067 )") 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:48.067 { 00:46:48.067 "params": { 00:46:48.067 "name": "Nvme$subsystem", 00:46:48.067 "trtype": "$TEST_TRANSPORT", 00:46:48.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:48.067 "adrfam": "ipv4", 00:46:48.067 "trsvcid": "$NVMF_PORT", 00:46:48.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:48.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:48.067 "hdgst": ${hdgst:-false}, 00:46:48.067 "ddgst": ${ddgst:-false} 00:46:48.067 }, 00:46:48.067 "method": "bdev_nvme_attach_controller" 00:46:48.067 } 00:46:48.067 EOF 00:46:48.067 )") 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
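Note: the create_subsystem/destroy_subsystem steps traced in this test are plain SPDK RPCs; rpc_cmd in the harness forwards its arguments to scripts/rpc.py against the running nvmf target. A standalone sketch of the DIF-type-2 setup performed above, with every argument taken from the trace (the rpc.py path assumes the standard SPDK checkout layout):

  # Null bdev (64 MiB, 512-byte blocks) with 16-byte metadata carrying DIF
  # type 2 protection info, exported via an NVMe-oF/TCP subsystem on
  # 10.0.0.2:4420.
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

  # Teardown mirrors setup, as in the destroy_subsystems trace:
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0

The run above repeats the create sequence for cnode1/bdev_null1 and cnode2/bdev_null2 so the three-file fio job has one subsystem per file.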
00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:46:48.067 13:51:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:48.067 "params": { 00:46:48.067 "name": "Nvme0", 00:46:48.067 "trtype": "tcp", 00:46:48.067 "traddr": "10.0.0.2", 00:46:48.067 "adrfam": "ipv4", 00:46:48.067 "trsvcid": "4420", 00:46:48.067 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:48.067 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:48.067 "hdgst": false, 00:46:48.067 "ddgst": false 00:46:48.067 }, 00:46:48.067 "method": "bdev_nvme_attach_controller" 00:46:48.067 },{ 00:46:48.067 "params": { 00:46:48.067 "name": "Nvme1", 00:46:48.067 "trtype": "tcp", 00:46:48.067 "traddr": "10.0.0.2", 00:46:48.067 "adrfam": "ipv4", 00:46:48.067 "trsvcid": "4420", 00:46:48.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:48.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:48.067 "hdgst": false, 00:46:48.067 "ddgst": false 00:46:48.067 }, 00:46:48.067 "method": "bdev_nvme_attach_controller" 00:46:48.067 },{ 00:46:48.067 "params": { 00:46:48.067 "name": "Nvme2", 00:46:48.067 "trtype": "tcp", 00:46:48.067 "traddr": "10.0.0.2", 00:46:48.067 "adrfam": "ipv4", 00:46:48.067 "trsvcid": "4420", 00:46:48.067 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:46:48.067 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:46:48.067 "hdgst": false, 00:46:48.067 "ddgst": false 00:46:48.067 }, 00:46:48.067 "method": "bdev_nvme_attach_controller" 00:46:48.067 }' 00:46:48.328 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:48.328 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:48.328 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:46:48.328 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:48.328 13:51:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:48.587 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:48.587 ... 00:46:48.587 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:48.587 ... 00:46:48.587 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:48.587 ... 
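Note: the LD_PRELOAD line traced above is the harness accommodating fio loading the SPDK bdev ioengine as an external plugin under ASan: it ldd's the plugin and, if an ASan runtime is linked, preloads that runtime ahead of the plugin so the sanitizer's interceptors resolve before fio starts. A condensed sketch of that launch, with the plugin path as in this workspace (the harness's loop also checks libclang_rt.asan, per the sanitizers array in the trace; this sketch handles only the libasan case):

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  # Find the ASan runtime the plugin was linked against, if any (same
  # ldd | grep | awk pipeline as the trace).
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  [[ -n $asan_lib ]] && preload="$asan_lib $plugin" || preload="$plugin"
  # fd 62 carries the JSON bdev config from gen_nvmf_target_json; fd 61
  # carries the fio job file from gen_fio_conf.
  LD_PRELOAD="$preload" /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf /dev/fd/62 /dev/fd/61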
00:46:48.587 fio-3.35 00:46:48.587 Starting 24 threads 00:47:00.816 00:47:00.816 filename0: (groupid=0, jobs=1): err= 0: pid=84027: Thu Nov 7 13:52:07 2024 00:47:00.816 read: IOPS=464, BW=1856KiB/s (1901kB/s)(18.2MiB/10034msec) 00:47:00.816 slat (usec): min=6, max=130, avg=10.72, stdev= 5.50 00:47:00.816 clat (usec): min=3321, max=49793, avg=34389.86, stdev=6666.17 00:47:00.816 lat (usec): min=3330, max=49809, avg=34400.58, stdev=6666.25 00:47:00.816 clat percentiles (usec): 00:47:00.816 | 1.00th=[ 6390], 5.00th=[24773], 10.00th=[25560], 20.00th=[27132], 00:47:00.816 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:47:00.816 | 70.00th=[36963], 80.00th=[37487], 90.00th=[38536], 95.00th=[39060], 00:47:00.816 | 99.00th=[40109], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 00:47:00.816 | 99.99th=[49546] 00:47:00.816 bw ( KiB/s): min= 1660, max= 2560, per=4.52%, avg=1855.20, stdev=205.83, samples=20 00:47:00.816 iops : min= 415, max= 640, avg=463.80, stdev=51.46, samples=20 00:47:00.816 lat (msec) : 4=0.04%, 10=2.66%, 20=0.04%, 50=97.25% 00:47:00.816 cpu : usr=98.15%, sys=1.27%, ctx=131, majf=0, minf=1634 00:47:00.816 IO depths : 1=5.6%, 2=11.8%, 4=24.8%, 8=50.9%, 16=6.9%, 32=0.0%, >=64=0.0% 00:47:00.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.816 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.816 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.816 filename0: (groupid=0, jobs=1): err= 0: pid=84028: Thu Nov 7 13:52:07 2024 00:47:00.816 read: IOPS=425, BW=1702KiB/s (1743kB/s)(16.7MiB/10038msec) 00:47:00.816 slat (nsec): min=5162, max=75347, avg=18961.70, stdev=10713.52 00:47:00.816 clat (usec): min=12467, max=70953, avg=37430.36, stdev=2820.28 00:47:00.816 lat (usec): min=12476, max=70976, avg=37449.32, stdev=2820.84 00:47:00.816 clat percentiles (usec): 00:47:00.816 | 1.00th=[26084], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:47:00.816 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:47:00.816 | 70.00th=[37487], 80.00th=[38536], 90.00th=[38536], 95.00th=[39584], 00:47:00.816 | 99.00th=[41157], 99.50th=[51119], 99.90th=[70779], 99.95th=[70779], 00:47:00.816 | 99.99th=[70779] 00:47:00.816 bw ( KiB/s): min= 1660, max= 1795, per=4.15%, avg=1702.30, stdev=60.16, samples=20 00:47:00.816 iops : min= 415, max= 448, avg=425.50, stdev=15.00, samples=20 00:47:00.816 lat (msec) : 20=0.05%, 50=99.34%, 100=0.61% 00:47:00.816 cpu : usr=98.48%, sys=1.13%, ctx=66, majf=0, minf=1636 00:47:00.816 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:47:00.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.816 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.816 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.816 filename0: (groupid=0, jobs=1): err= 0: pid=84029: Thu Nov 7 13:52:07 2024 00:47:00.816 read: IOPS=424, BW=1696KiB/s (1737kB/s)(16.6MiB/10036msec) 00:47:00.816 slat (nsec): min=4346, max=56861, avg=13673.81, stdev=8014.53 00:47:00.816 clat (usec): min=24761, max=71481, avg=37610.97, stdev=2644.13 00:47:00.816 lat (usec): min=24771, max=71495, avg=37624.65, stdev=2644.27 00:47:00.816 clat percentiles (usec): 00:47:00.816 | 1.00th=[35914], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 
00:47:00.816 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:47:00.816 | 70.00th=[37487], 80.00th=[38536], 90.00th=[39060], 95.00th=[39584], 00:47:00.816 | 99.00th=[41157], 99.50th=[49546], 99.90th=[71828], 99.95th=[71828], 00:47:00.816 | 99.99th=[71828] 00:47:00.816 bw ( KiB/s): min= 1536, max= 1792, per=4.13%, avg=1695.60, stdev=70.62, samples=20 00:47:00.816 iops : min= 384, max= 448, avg=423.90, stdev=17.65, samples=20 00:47:00.816 lat (msec) : 50=99.62%, 100=0.38% 00:47:00.816 cpu : usr=97.71%, sys=1.56%, ctx=208, majf=0, minf=1634 00:47:00.816 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:47:00.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.817 filename0: (groupid=0, jobs=1): err= 0: pid=84031: Thu Nov 7 13:52:07 2024 00:47:00.817 read: IOPS=421, BW=1687KiB/s (1727kB/s)(16.5MiB/10018msec) 00:47:00.817 slat (nsec): min=4474, max=59046, avg=18724.02, stdev=9845.95 00:47:00.817 clat (usec): min=23548, max=96431, avg=37763.76, stdev=3692.26 00:47:00.817 lat (usec): min=23590, max=96448, avg=37782.49, stdev=3691.79 00:47:00.817 clat percentiles (usec): 00:47:00.817 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:47:00.817 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:47:00.817 | 70.00th=[37487], 80.00th=[38536], 90.00th=[38536], 95.00th=[39584], 00:47:00.817 | 99.00th=[41157], 99.50th=[71828], 99.90th=[83362], 99.95th=[83362], 00:47:00.817 | 99.99th=[95945] 00:47:00.817 bw ( KiB/s): min= 1536, max= 1792, per=4.10%, avg=1683.00, stdev=75.21, samples=20 00:47:00.817 iops : min= 384, max= 448, avg=420.75, stdev=18.80, samples=20 00:47:00.817 lat (msec) : 50=99.24%, 100=0.76% 00:47:00.817 cpu : usr=98.44%, sys=1.10%, ctx=92, majf=0, minf=1635 00:47:00.817 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:47:00.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.817 filename0: (groupid=0, jobs=1): err= 0: pid=84032: Thu Nov 7 13:52:07 2024 00:47:00.817 read: IOPS=421, BW=1687KiB/s (1727kB/s)(16.6MiB/10056msec) 00:47:00.817 slat (nsec): min=6342, max=69916, avg=16490.84, stdev=10498.77 00:47:00.817 clat (usec): min=25918, max=83701, avg=37815.67, stdev=3732.11 00:47:00.817 lat (usec): min=25948, max=83713, avg=37832.16, stdev=3731.19 00:47:00.817 clat percentiles (usec): 00:47:00.817 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:47:00.817 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:47:00.817 | 70.00th=[37487], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:47:00.817 | 99.00th=[41157], 99.50th=[71828], 99.90th=[83362], 99.95th=[83362], 00:47:00.817 | 99.99th=[83362] 00:47:00.817 bw ( KiB/s): min= 1536, max= 1792, per=4.12%, avg=1689.15, stdev=78.37, samples=20 00:47:00.817 iops : min= 384, max= 448, avg=422.25, stdev=19.67, samples=20 00:47:00.817 lat (msec) : 50=99.20%, 100=0.80% 00:47:00.817 cpu : usr=98.43%, sys=1.07%, ctx=53, majf=0, minf=1636 00:47:00.817 IO depths : 1=6.1%, 
2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:47:00.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.817 filename0: (groupid=0, jobs=1): err= 0: pid=84033: Thu Nov 7 13:52:07 2024 00:47:00.817 read: IOPS=458, BW=1832KiB/s (1876kB/s)(17.9MiB/10026msec) 00:47:00.817 slat (nsec): min=6422, max=50443, avg=11462.77, stdev=5777.82 00:47:00.817 clat (usec): min=4710, max=49290, avg=34827.90, stdev=6155.78 00:47:00.817 lat (usec): min=4719, max=49302, avg=34839.36, stdev=6156.87 00:47:00.817 clat percentiles (usec): 00:47:00.817 | 1.00th=[ 5211], 5.00th=[24249], 10.00th=[25560], 20.00th=[36439], 00:47:00.817 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:47:00.817 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38536], 95.00th=[39060], 00:47:00.817 | 99.00th=[40109], 99.50th=[40109], 99.90th=[49021], 99.95th=[49021], 00:47:00.817 | 99.99th=[49546] 00:47:00.817 bw ( KiB/s): min= 1660, max= 2299, per=4.46%, avg=1829.35, stdev=143.83, samples=20 00:47:00.817 iops : min= 415, max= 574, avg=457.30, stdev=35.83, samples=20 00:47:00.817 lat (msec) : 10=1.74%, 20=0.35%, 50=97.91% 00:47:00.817 cpu : usr=98.68%, sys=1.02%, ctx=15, majf=0, minf=1638 00:47:00.817 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:47:00.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.817 filename0: (groupid=0, jobs=1): err= 0: pid=84035: Thu Nov 7 13:52:07 2024 00:47:00.817 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10011msec) 00:47:00.817 slat (nsec): min=6211, max=63079, avg=16896.81, stdev=10043.52 00:47:00.817 clat (usec): min=24175, max=75579, avg=37747.71, stdev=3339.25 00:47:00.817 lat (usec): min=24183, max=75614, avg=37764.61, stdev=3339.26 00:47:00.817 clat percentiles (usec): 00:47:00.817 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:47:00.817 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:47:00.817 | 70.00th=[37487], 80.00th=[38536], 90.00th=[39060], 95.00th=[39584], 00:47:00.817 | 99.00th=[41157], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:47:00.817 | 99.99th=[76022] 00:47:00.817 bw ( KiB/s): min= 1536, max= 1792, per=4.12%, avg=1690.32, stdev=68.95, samples=19 00:47:00.817 iops : min= 384, max= 448, avg=422.58, stdev=17.24, samples=19 00:47:00.817 lat (msec) : 50=99.10%, 100=0.90% 00:47:00.817 cpu : usr=98.70%, sys=1.00%, ctx=14, majf=0, minf=1633 00:47:00.817 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:47:00.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.817 filename0: (groupid=0, jobs=1): err= 0: pid=84036: Thu Nov 7 13:52:07 2024 00:47:00.817 read: IOPS=422, BW=1690KiB/s (1730kB/s)(16.6MiB/10041msec) 00:47:00.817 slat (nsec): min=6400, max=71150, avg=19979.95, 
stdev=11180.19 00:47:00.817 clat (usec): min=24886, max=86779, avg=37684.60, stdev=3838.85 00:47:00.817 lat (usec): min=24894, max=86814, avg=37704.58, stdev=3838.05 00:47:00.817 clat percentiles (usec): 00:47:00.817 | 1.00th=[30016], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:47:00.817 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:47:00.817 | 70.00th=[37487], 80.00th=[38536], 90.00th=[38536], 95.00th=[39584], 00:47:00.817 | 99.00th=[43779], 99.50th=[65799], 99.90th=[83362], 99.95th=[83362], 00:47:00.817 | 99.99th=[86508] 00:47:00.817 bw ( KiB/s): min= 1555, max= 1792, per=4.14%, avg=1698.26, stdev=69.89, samples=19 00:47:00.817 iops : min= 388, max= 448, avg=424.53, stdev=17.56, samples=19 00:47:00.817 lat (msec) : 50=99.01%, 100=0.99% 00:47:00.817 cpu : usr=98.80%, sys=0.88%, ctx=19, majf=0, minf=1633 00:47:00.817 IO depths : 1=5.9%, 2=12.0%, 4=24.3%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:47:00.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 issued rwts: total=4242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.817 filename1: (groupid=0, jobs=1): err= 0: pid=84037: Thu Nov 7 13:52:07 2024 00:47:00.817 read: IOPS=420, BW=1682KiB/s (1723kB/s)(16.5MiB/10044msec) 00:47:00.817 slat (nsec): min=6298, max=77089, avg=20430.93, stdev=11540.88 00:47:00.817 clat (usec): min=24978, max=91061, avg=37817.62, stdev=4386.33 00:47:00.817 lat (usec): min=24985, max=91090, avg=37838.05, stdev=4385.47 00:47:00.817 clat percentiles (usec): 00:47:00.817 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:47:00.817 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:47:00.817 | 70.00th=[37487], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:47:00.817 | 99.00th=[40633], 99.50th=[83362], 99.90th=[88605], 99.95th=[88605], 00:47:00.817 | 99.99th=[90702] 00:47:00.817 bw ( KiB/s): min= 1539, max= 1792, per=4.12%, avg=1690.47, stdev=68.43, samples=19 00:47:00.817 iops : min= 384, max= 448, avg=422.58, stdev=17.20, samples=19 00:47:00.817 lat (msec) : 50=99.24%, 100=0.76% 00:47:00.817 cpu : usr=98.58%, sys=0.98%, ctx=50, majf=0, minf=1634 00:47:00.817 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:47:00.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.817 filename1: (groupid=0, jobs=1): err= 0: pid=84038: Thu Nov 7 13:52:07 2024 00:47:00.817 read: IOPS=421, BW=1686KiB/s (1727kB/s)(16.6MiB/10057msec) 00:47:00.817 slat (nsec): min=6326, max=67245, avg=18087.22, stdev=9728.15 00:47:00.817 clat (usec): min=22982, max=83780, avg=37790.69, stdev=3718.67 00:47:00.817 lat (usec): min=22997, max=83793, avg=37808.78, stdev=3717.79 00:47:00.817 clat percentiles (usec): 00:47:00.817 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:47:00.817 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:47:00.817 | 70.00th=[37487], 80.00th=[38536], 90.00th=[38536], 95.00th=[39584], 00:47:00.817 | 99.00th=[40633], 99.50th=[71828], 99.90th=[83362], 99.95th=[83362], 00:47:00.817 | 99.99th=[83362] 00:47:00.817 bw ( KiB/s): min= 1536, 
max= 1792, per=4.12%, avg=1689.15, stdev=78.37, samples=20 00:47:00.817 iops : min= 384, max= 448, avg=422.25, stdev=19.67, samples=20 00:47:00.817 lat (msec) : 50=99.20%, 100=0.80% 00:47:00.817 cpu : usr=98.68%, sys=1.00%, ctx=17, majf=0, minf=1632 00:47:00.817 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:47:00.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.817 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.817 filename1: (groupid=0, jobs=1): err= 0: pid=84040: Thu Nov 7 13:52:07 2024 00:47:00.817 read: IOPS=423, BW=1696KiB/s (1736kB/s)(16.6MiB/10039msec) 00:47:00.817 slat (nsec): min=4682, max=59987, avg=18053.58, stdev=9072.22 00:47:00.817 clat (usec): min=20946, max=71004, avg=37573.84, stdev=2498.12 00:47:00.817 lat (usec): min=20955, max=71023, avg=37591.90, stdev=2498.31 00:47:00.817 clat percentiles (usec): 00:47:00.817 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:47:00.817 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:47:00.817 | 70.00th=[37487], 80.00th=[38536], 90.00th=[38536], 95.00th=[39584], 00:47:00.818 | 99.00th=[41157], 99.50th=[46924], 99.90th=[70779], 99.95th=[70779], 00:47:00.818 | 99.99th=[70779] 00:47:00.818 bw ( KiB/s): min= 1660, max= 1792, per=4.13%, avg=1695.75, stdev=56.56, samples=20 00:47:00.818 iops : min= 415, max= 448, avg=423.90, stdev=14.16, samples=20 00:47:00.818 lat (msec) : 50=99.62%, 100=0.38% 00:47:00.818 cpu : usr=98.64%, sys=1.05%, ctx=22, majf=0, minf=1634 00:47:00.818 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:47:00.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.818 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.818 filename1: (groupid=0, jobs=1): err= 0: pid=84041: Thu Nov 7 13:52:07 2024 00:47:00.818 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.6MiB/10047msec) 00:47:00.818 slat (nsec): min=6149, max=71523, avg=18992.51, stdev=10839.47 00:47:00.818 clat (usec): min=36117, max=76499, avg=37726.84, stdev=3102.99 00:47:00.818 lat (usec): min=36127, max=76529, avg=37745.83, stdev=3102.02 00:47:00.818 clat percentiles (usec): 00:47:00.818 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:47:00.818 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:47:00.818 | 70.00th=[37487], 80.00th=[38536], 90.00th=[39060], 95.00th=[39060], 00:47:00.818 | 99.00th=[41157], 99.50th=[66847], 99.90th=[76022], 99.95th=[76022], 00:47:00.818 | 99.99th=[76022] 00:47:00.818 bw ( KiB/s): min= 1536, max= 1792, per=4.12%, avg=1690.53, stdev=68.29, samples=19 00:47:00.818 iops : min= 384, max= 448, avg=422.63, stdev=17.07, samples=19 00:47:00.818 lat (msec) : 50=99.25%, 100=0.75% 00:47:00.818 cpu : usr=98.65%, sys=0.92%, ctx=56, majf=0, minf=1634 00:47:00.818 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:47:00.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.818 
latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.818 filename1: (groupid=0, jobs=1): err= 0: pid=84042: Thu Nov 7 13:52:07 2024 00:47:00.818 read: IOPS=425, BW=1700KiB/s (1741kB/s)(16.7MiB/10051msec) 00:47:00.818 slat (nsec): min=4251, max=72684, avg=17946.44, stdev=11165.73 00:47:00.818 clat (usec): min=20851, max=96121, avg=37483.71, stdev=5757.25 00:47:00.818 lat (usec): min=20871, max=96138, avg=37501.66, stdev=5756.74 00:47:00.818 clat percentiles (usec): 00:47:00.818 | 1.00th=[26084], 5.00th=[31065], 10.00th=[32375], 20.00th=[36963], 00:47:00.818 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:47:00.818 | 70.00th=[37487], 80.00th=[38536], 90.00th=[40109], 95.00th=[43779], 00:47:00.818 | 99.00th=[53740], 99.50th=[85459], 99.90th=[95945], 99.95th=[95945], 00:47:00.818 | 99.99th=[95945] 00:47:00.818 bw ( KiB/s): min= 1410, max= 1776, per=4.15%, avg=1702.30, stdev=83.06, samples=20 00:47:00.818 iops : min= 352, max= 444, avg=425.55, stdev=20.86, samples=20 00:47:00.818 lat (msec) : 50=98.83%, 100=1.17% 00:47:00.818 cpu : usr=98.35%, sys=1.18%, ctx=37, majf=0, minf=1630 00:47:00.818 IO depths : 1=0.3%, 2=3.4%, 4=13.3%, 8=68.8%, 16=14.2%, 32=0.0%, >=64=0.0% 00:47:00.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 complete : 0=0.0%, 4=91.5%, 8=4.9%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.818 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.818 filename1: (groupid=0, jobs=1): err= 0: pid=84043: Thu Nov 7 13:52:07 2024 00:47:00.818 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10010msec) 00:47:00.818 slat (nsec): min=6699, max=68474, avg=17741.14, stdev=8734.50 00:47:00.818 clat (usec): min=36342, max=75341, avg=37759.05, stdev=3236.16 00:47:00.818 lat (usec): min=36358, max=75372, avg=37776.79, stdev=3235.27 00:47:00.818 clat percentiles (usec): 00:47:00.818 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:47:00.818 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:47:00.818 | 70.00th=[37487], 80.00th=[38536], 90.00th=[39060], 95.00th=[39584], 00:47:00.818 | 99.00th=[41157], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:47:00.818 | 99.99th=[74974] 00:47:00.818 bw ( KiB/s): min= 1539, max= 1792, per=4.12%, avg=1690.47, stdev=68.43, samples=19 00:47:00.818 iops : min= 384, max= 448, avg=422.58, stdev=17.20, samples=19 00:47:00.818 lat (msec) : 50=99.24%, 100=0.76% 00:47:00.818 cpu : usr=98.57%, sys=1.01%, ctx=34, majf=0, minf=1632 00:47:00.818 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:47:00.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.818 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.818 filename1: (groupid=0, jobs=1): err= 0: pid=84044: Thu Nov 7 13:52:07 2024 00:47:00.818 read: IOPS=422, BW=1691KiB/s (1732kB/s)(16.6MiB/10066msec) 00:47:00.818 slat (nsec): min=6426, max=72361, avg=18894.13, stdev=10663.35 00:47:00.818 clat (usec): min=23262, max=83370, avg=37682.83, stdev=3202.79 00:47:00.818 lat (usec): min=23269, max=83388, avg=37701.73, stdev=3201.82 00:47:00.818 clat percentiles (usec): 00:47:00.818 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:47:00.818 | 30.00th=[36963], 
40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:47:00.818 | 70.00th=[37487], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:47:00.818 | 99.00th=[41157], 99.50th=[53740], 99.90th=[83362], 99.95th=[83362], 00:47:00.818 | 99.99th=[83362] 00:47:00.818 bw ( KiB/s): min= 1660, max= 1792, per=4.13%, avg=1695.75, stdev=55.35, samples=20 00:47:00.818 iops : min= 415, max= 448, avg=423.90, stdev=13.86, samples=20 00:47:00.818 lat (msec) : 50=99.20%, 100=0.80% 00:47:00.818 cpu : usr=98.70%, sys=0.99%, ctx=17, majf=0, minf=1635 00:47:00.818 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:47:00.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.818 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.818 filename1: (groupid=0, jobs=1): err= 0: pid=84045: Thu Nov 7 13:52:07 2024 00:47:00.818 read: IOPS=484, BW=1939KiB/s (1986kB/s)(19.0MiB/10032msec) 00:47:00.818 slat (nsec): min=6222, max=56178, avg=11620.17, stdev=6081.30 00:47:00.818 clat (usec): min=4545, max=49628, avg=32899.34, stdev=6761.78 00:47:00.818 lat (usec): min=4560, max=49636, avg=32910.96, stdev=6763.20 00:47:00.818 clat percentiles (usec): 00:47:00.818 | 1.00th=[ 6325], 5.00th=[23987], 10.00th=[24773], 20.00th=[25822], 00:47:00.818 | 30.00th=[26870], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:47:00.818 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38536], 00:47:00.818 | 99.00th=[39584], 99.50th=[46400], 99.90th=[49546], 99.95th=[49546], 00:47:00.818 | 99.99th=[49546] 00:47:00.818 bw ( KiB/s): min= 1664, max= 2432, per=4.72%, avg=1938.45, stdev=224.11, samples=20 00:47:00.818 iops : min= 416, max= 608, avg=484.50, stdev=55.99, samples=20 00:47:00.818 lat (msec) : 10=1.93%, 20=0.33%, 50=97.74% 00:47:00.818 cpu : usr=98.79%, sys=0.88%, ctx=16, majf=0, minf=1632 00:47:00.818 IO depths : 1=5.9%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:47:00.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.818 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.818 filename2: (groupid=0, jobs=1): err= 0: pid=84047: Thu Nov 7 13:52:07 2024 00:47:00.818 read: IOPS=424, BW=1697KiB/s (1737kB/s)(16.7MiB/10071msec) 00:47:00.818 slat (nsec): min=4861, max=75827, avg=19269.03, stdev=12009.26 00:47:00.818 clat (usec): min=12619, max=76333, avg=37551.15, stdev=2699.85 00:47:00.818 lat (usec): min=12629, max=76347, avg=37570.42, stdev=2698.77 00:47:00.818 clat percentiles (usec): 00:47:00.818 | 1.00th=[33162], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:47:00.818 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:47:00.818 | 70.00th=[37487], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:47:00.818 | 99.00th=[40633], 99.50th=[41157], 99.90th=[76022], 99.95th=[76022], 00:47:00.818 | 99.99th=[76022] 00:47:00.818 bw ( KiB/s): min= 1660, max= 1792, per=4.15%, avg=1701.95, stdev=60.06, samples=20 00:47:00.818 iops : min= 415, max= 448, avg=425.45, stdev=15.04, samples=20 00:47:00.818 lat (msec) : 20=0.05%, 50=99.58%, 100=0.37% 00:47:00.818 cpu : usr=98.87%, sys=0.81%, ctx=19, majf=0, minf=1633 00:47:00.818 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:47:00.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.818 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.818 filename2: (groupid=0, jobs=1): err= 0: pid=84048: Thu Nov 7 13:52:07 2024 00:47:00.818 read: IOPS=421, BW=1686KiB/s (1727kB/s)(16.6MiB/10059msec) 00:47:00.818 slat (nsec): min=4174, max=77332, avg=17941.35, stdev=10294.05 00:47:00.818 clat (usec): min=24025, max=92226, avg=37802.48, stdev=3685.14 00:47:00.818 lat (usec): min=24038, max=92242, avg=37820.42, stdev=3683.78 00:47:00.818 clat percentiles (usec): 00:47:00.818 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:47:00.818 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:47:00.818 | 70.00th=[37487], 80.00th=[38536], 90.00th=[39060], 95.00th=[39060], 00:47:00.818 | 99.00th=[40633], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:47:00.818 | 99.99th=[91751] 00:47:00.818 bw ( KiB/s): min= 1532, max= 1792, per=4.12%, avg=1689.40, stdev=67.45, samples=20 00:47:00.818 iops : min= 383, max= 448, avg=422.35, stdev=16.86, samples=20 00:47:00.818 lat (msec) : 50=99.25%, 100=0.75% 00:47:00.818 cpu : usr=98.80%, sys=0.88%, ctx=14, majf=0, minf=1633 00:47:00.818 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:47:00.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.818 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.818 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.818 filename2: (groupid=0, jobs=1): err= 0: pid=84049: Thu Nov 7 13:52:07 2024 00:47:00.818 read: IOPS=421, BW=1687KiB/s (1727kB/s)(16.6MiB/10055msec) 00:47:00.818 slat (nsec): min=4383, max=79219, avg=18844.01, stdev=10405.31 00:47:00.818 clat (usec): min=26275, max=76596, avg=37768.19, stdev=3490.87 00:47:00.819 lat (usec): min=26283, max=76612, avg=37787.04, stdev=3489.62 00:47:00.819 clat percentiles (usec): 00:47:00.819 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:47:00.819 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:47:00.819 | 70.00th=[37487], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:47:00.819 | 99.00th=[40633], 99.50th=[76022], 99.90th=[76022], 99.95th=[77071], 00:47:00.819 | 99.99th=[77071] 00:47:00.819 bw ( KiB/s): min= 1536, max= 1792, per=4.12%, avg=1689.40, stdev=67.05, samples=20 00:47:00.819 iops : min= 384, max= 448, avg=422.35, stdev=16.76, samples=20 00:47:00.819 lat (msec) : 50=99.20%, 100=0.80% 00:47:00.819 cpu : usr=98.86%, sys=0.85%, ctx=15, majf=0, minf=1631 00:47:00.819 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:47:00.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.819 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.819 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.819 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.819 filename2: (groupid=0, jobs=1): err= 0: pid=84050: Thu Nov 7 13:52:07 2024 00:47:00.819 read: IOPS=420, BW=1682KiB/s (1722kB/s)(16.5MiB/10045msec) 00:47:00.819 slat (nsec): min=4440, max=73120, avg=20043.44, stdev=10379.53 
00:47:00.819 clat (msec): min=21, max=106, avg=37.86, stdev= 4.78 00:47:00.819 lat (msec): min=21, max=106, avg=37.88, stdev= 4.78 00:47:00.819 clat percentiles (msec): 00:47:00.819 | 1.00th=[ 37], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:47:00.819 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:47:00.819 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 39], 95.00th=[ 40], 00:47:00.819 | 99.00th=[ 41], 99.50th=[ 84], 99.90th=[ 97], 99.95th=[ 97], 00:47:00.819 | 99.99th=[ 107] 00:47:00.819 bw ( KiB/s): min= 1408, max= 1792, per=4.10%, avg=1683.00, stdev=95.14, samples=20 00:47:00.819 iops : min= 352, max= 448, avg=420.75, stdev=23.79, samples=20 00:47:00.819 lat (msec) : 50=99.20%, 100=0.76%, 250=0.05% 00:47:00.819 cpu : usr=98.26%, sys=1.18%, ctx=74, majf=0, minf=1633 00:47:00.819 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:47:00.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.819 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.819 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.819 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.819 filename2: (groupid=0, jobs=1): err= 0: pid=84052: Thu Nov 7 13:52:07 2024 00:47:00.819 read: IOPS=424, BW=1697KiB/s (1737kB/s)(16.7MiB/10071msec) 00:47:00.819 slat (nsec): min=6291, max=77573, avg=11030.79, stdev=6982.31 00:47:00.819 clat (usec): min=23471, max=76535, avg=37619.29, stdev=2730.29 00:47:00.819 lat (usec): min=23481, max=76548, avg=37630.32, stdev=2729.39 00:47:00.819 clat percentiles (usec): 00:47:00.819 | 1.00th=[33162], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:47:00.819 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:47:00.819 | 70.00th=[37487], 80.00th=[38536], 90.00th=[39060], 95.00th=[39060], 00:47:00.819 | 99.00th=[40633], 99.50th=[41681], 99.90th=[76022], 99.95th=[76022], 00:47:00.819 | 99.99th=[76022] 00:47:00.819 bw ( KiB/s): min= 1660, max= 1792, per=4.15%, avg=1702.15, stdev=60.37, samples=20 00:47:00.819 iops : min= 415, max= 448, avg=425.50, stdev=15.12, samples=20 00:47:00.819 lat (msec) : 50=99.53%, 100=0.47% 00:47:00.819 cpu : usr=98.67%, sys=1.02%, ctx=15, majf=0, minf=1633 00:47:00.819 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:47:00.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.819 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.819 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.819 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.819 filename2: (groupid=0, jobs=1): err= 0: pid=84053: Thu Nov 7 13:52:07 2024 00:47:00.819 read: IOPS=422, BW=1691KiB/s (1732kB/s)(16.6MiB/10067msec) 00:47:00.819 slat (nsec): min=6207, max=72638, avg=19440.24, stdev=10987.05 00:47:00.819 clat (usec): min=32002, max=79137, avg=37648.87, stdev=2720.08 00:47:00.819 lat (usec): min=32012, max=79145, avg=37668.31, stdev=2718.98 00:47:00.819 clat percentiles (usec): 00:47:00.819 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:47:00.819 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:47:00.819 | 70.00th=[37487], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:47:00.819 | 99.00th=[40633], 99.50th=[52167], 99.90th=[76022], 99.95th=[76022], 00:47:00.819 | 99.99th=[79168] 00:47:00.819 bw ( KiB/s): min= 1660, max= 1792, per=4.13%, avg=1695.95, stdev=56.91, 
samples=20 00:47:00.819 iops : min= 415, max= 448, avg=423.95, stdev=14.25, samples=20 00:47:00.819 lat (msec) : 50=99.25%, 100=0.75% 00:47:00.819 cpu : usr=98.46%, sys=1.14%, ctx=69, majf=0, minf=1634 00:47:00.819 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:47:00.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.819 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.819 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.819 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.819 filename2: (groupid=0, jobs=1): err= 0: pid=84054: Thu Nov 7 13:52:07 2024 00:47:00.819 read: IOPS=423, BW=1695KiB/s (1736kB/s)(16.6MiB/10041msec) 00:47:00.819 slat (nsec): min=4407, max=59335, avg=12740.69, stdev=7335.25 00:47:00.819 clat (usec): min=20950, max=71288, avg=37636.05, stdev=2665.14 00:47:00.819 lat (usec): min=20979, max=71300, avg=37648.79, stdev=2664.40 00:47:00.819 clat percentiles (usec): 00:47:00.819 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:47:00.819 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:47:00.819 | 70.00th=[37487], 80.00th=[38536], 90.00th=[39060], 95.00th=[39584], 00:47:00.819 | 99.00th=[41157], 99.50th=[51119], 99.90th=[70779], 99.95th=[70779], 00:47:00.819 | 99.99th=[70779] 00:47:00.819 bw ( KiB/s): min= 1660, max= 1792, per=4.13%, avg=1695.60, stdev=56.64, samples=20 00:47:00.819 iops : min= 415, max= 448, avg=423.90, stdev=14.16, samples=20 00:47:00.819 lat (msec) : 50=99.44%, 100=0.56% 00:47:00.819 cpu : usr=98.26%, sys=1.23%, ctx=146, majf=0, minf=1633 00:47:00.819 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:47:00.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.819 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.819 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.819 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:00.819 filename2: (groupid=0, jobs=1): err= 0: pid=84055: Thu Nov 7 13:52:07 2024 00:47:00.819 read: IOPS=424, BW=1699KiB/s (1740kB/s)(16.7MiB/10057msec) 00:47:00.819 slat (nsec): min=4391, max=73078, avg=18585.01, stdev=10749.96 00:47:00.819 clat (usec): min=18735, max=83673, avg=37486.56, stdev=4584.02 00:47:00.819 lat (usec): min=18743, max=83684, avg=37505.15, stdev=4583.81 00:47:00.819 clat percentiles (usec): 00:47:00.819 | 1.00th=[25297], 5.00th=[36439], 10.00th=[36439], 20.00th=[36963], 00:47:00.819 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[36963], 00:47:00.819 | 70.00th=[37487], 80.00th=[38536], 90.00th=[38536], 95.00th=[39584], 00:47:00.819 | 99.00th=[53740], 99.50th=[82314], 99.90th=[83362], 99.95th=[83362], 00:47:00.819 | 99.99th=[83362] 00:47:00.819 bw ( KiB/s): min= 1536, max= 1936, per=4.15%, avg=1702.60, stdev=86.61, samples=20 00:47:00.819 iops : min= 384, max= 484, avg=425.65, stdev=21.65, samples=20 00:47:00.819 lat (msec) : 20=0.09%, 50=98.74%, 100=1.17% 00:47:00.819 cpu : usr=98.51%, sys=1.04%, ctx=53, majf=0, minf=1633 00:47:00.819 IO depths : 1=5.7%, 2=11.6%, 4=24.0%, 8=51.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:47:00.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.819 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:00.819 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:00.819 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:47:00.819 00:47:00.819 Run status group 0 (all jobs): 00:47:00.819 READ: bw=40.1MiB/s (42.0MB/s), 1682KiB/s-1939KiB/s (1722kB/s-1986kB/s), io=404MiB (423MB), run=10010-10071msec 00:47:01.081 ----------------------------------------------------- 00:47:01.081 Suppressions used: 00:47:01.081 count bytes template 00:47:01.081 45 402 /usr/src/fio/parse.c 00:47:01.081 1 8 libtcmalloc_minimal.so 00:47:01.081 1 904 libcrypto.so 00:47:01.081 ----------------------------------------------------- 00:47:01.081 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.081 bdev_null0 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.081 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.082 [2024-11-07 13:52:08.981875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.082 
13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.082 bdev_null1 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.082 13:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:47:01.082 { 00:47:01.082 "params": { 00:47:01.082 "name": "Nvme$subsystem", 00:47:01.082 "trtype": "$TEST_TRANSPORT", 00:47:01.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:01.082 "adrfam": "ipv4", 00:47:01.082 "trsvcid": "$NVMF_PORT", 
00:47:01.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:01.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:01.082 "hdgst": ${hdgst:-false}, 00:47:01.082 "ddgst": ${ddgst:-false} 00:47:01.082 }, 00:47:01.082 "method": "bdev_nvme_attach_controller" 00:47:01.082 } 00:47:01.082 EOF 00:47:01.082 )") 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:47:01.082 { 00:47:01.082 "params": { 00:47:01.082 "name": "Nvme$subsystem", 00:47:01.082 "trtype": "$TEST_TRANSPORT", 00:47:01.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:01.082 "adrfam": "ipv4", 00:47:01.082 "trsvcid": "$NVMF_PORT", 00:47:01.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:01.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:01.082 "hdgst": ${hdgst:-false}, 00:47:01.082 "ddgst": ${ddgst:-false} 00:47:01.082 }, 00:47:01.082 "method": "bdev_nvme_attach_controller" 00:47:01.082 } 00:47:01.082 EOF 00:47:01.082 )") 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:47:01.082 "params": { 00:47:01.082 "name": "Nvme0", 00:47:01.082 "trtype": "tcp", 00:47:01.082 "traddr": "10.0.0.2", 00:47:01.082 "adrfam": "ipv4", 00:47:01.082 "trsvcid": "4420", 00:47:01.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:01.082 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:01.082 "hdgst": false, 00:47:01.082 "ddgst": false 00:47:01.082 }, 00:47:01.082 "method": "bdev_nvme_attach_controller" 00:47:01.082 },{ 00:47:01.082 "params": { 00:47:01.082 "name": "Nvme1", 00:47:01.082 "trtype": "tcp", 00:47:01.082 "traddr": "10.0.0.2", 00:47:01.082 "adrfam": "ipv4", 00:47:01.082 "trsvcid": "4420", 00:47:01.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:47:01.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:47:01.082 "hdgst": false, 00:47:01.082 "ddgst": false 00:47:01.082 }, 00:47:01.082 "method": "bdev_nvme_attach_controller" 00:47:01.082 }' 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:47:01.082 13:52:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:01.657 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:47:01.657 ... 00:47:01.657 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:47:01.657 ... 
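Before launching fio, the wrapper ldd-inspects the plugin, finds the ASan runtime it links against, and preloads it ahead of the spdk_bdev engine so the sanitizer initializes before fio dlopens the plugin; the job file and JSON config then arrive on /dev/fd/61 and /dev/fd/62. A rough standalone equivalent is sketched below with ordinary files in place of the fd redirections. The /tmp paths and the Nvme0n1/Nvme1n1 filenames are assumptions (SPDK's usual controller-name-plus-n1 bdev naming), thread=1 is the fio plugin's documented requirement, and the remaining options mirror the bs/numjobs/iodepth/runtime values set earlier in the trace; 2 jobs across 2 files is why fio reports "Starting 4 threads" below:

# Sketch only -- file locations and bdev names are assumed, not from the trace.
cat > /tmp/dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/nvme_attach.json
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio /tmp/dif.fio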
00:47:01.657 fio-3.35 00:47:01.657 Starting 4 threads 00:47:08.242 00:47:08.242 filename0: (groupid=0, jobs=1): err= 0: pid=86521: Thu Nov 7 13:52:15 2024 00:47:08.242 read: IOPS=1694, BW=13.2MiB/s (13.9MB/s)(66.2MiB/5002msec) 00:47:08.242 slat (nsec): min=5884, max=45371, avg=7192.38, stdev=1869.98 00:47:08.242 clat (usec): min=1165, max=11715, avg=4700.27, stdev=746.36 00:47:08.242 lat (usec): min=1171, max=11760, avg=4707.46, stdev=746.20 00:47:08.242 clat percentiles (usec): 00:47:08.242 | 1.00th=[ 3752], 5.00th=[ 4113], 10.00th=[ 4178], 20.00th=[ 4228], 00:47:08.242 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4621], 00:47:08.242 | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 5276], 95.00th=[ 6652], 00:47:08.242 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 7439], 99.95th=[11338], 00:47:08.242 | 99.99th=[11731] 00:47:08.242 bw ( KiB/s): min=13264, max=13952, per=22.67%, avg=13552.00, stdev=193.00, samples=9 00:47:08.242 iops : min= 1658, max= 1744, avg=1694.00, stdev=24.12, samples=9 00:47:08.242 lat (msec) : 2=0.13%, 4=2.94%, 10=96.84%, 20=0.09% 00:47:08.242 cpu : usr=96.06%, sys=3.66%, ctx=5, majf=0, minf=1633 00:47:08.242 IO depths : 1=0.1%, 2=0.1%, 4=73.8%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:08.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.242 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.242 issued rwts: total=8474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:08.242 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:08.242 filename0: (groupid=0, jobs=1): err= 0: pid=86522: Thu Nov 7 13:52:15 2024 00:47:08.242 read: IOPS=1763, BW=13.8MiB/s (14.4MB/s)(68.9MiB/5002msec) 00:47:08.242 slat (nsec): min=5931, max=58342, avg=9882.59, stdev=3026.67 00:47:08.242 clat (usec): min=2462, max=7225, avg=4512.17, stdev=408.10 00:47:08.242 lat (usec): min=2468, max=7238, avg=4522.05, stdev=407.90 00:47:08.242 clat percentiles (usec): 00:47:08.242 | 1.00th=[ 3884], 5.00th=[ 4015], 10.00th=[ 4146], 20.00th=[ 4228], 00:47:08.242 | 30.00th=[ 4293], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:47:08.242 | 70.00th=[ 4621], 80.00th=[ 4883], 90.00th=[ 5145], 95.00th=[ 5145], 00:47:08.242 | 99.00th=[ 5538], 99.50th=[ 5997], 99.90th=[ 6980], 99.95th=[ 7111], 00:47:08.242 | 99.99th=[ 7242] 00:47:08.242 bw ( KiB/s): min=13968, max=14256, per=23.60%, avg=14104.00, stdev=82.71, samples=10 00:47:08.242 iops : min= 1746, max= 1782, avg=1763.00, stdev=10.34, samples=10 00:47:08.242 lat (msec) : 4=2.79%, 10=97.21% 00:47:08.242 cpu : usr=94.86%, sys=3.90%, ctx=269, majf=0, minf=1632 00:47:08.242 IO depths : 1=0.1%, 2=0.1%, 4=67.1%, 8=32.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:08.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.242 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.242 issued rwts: total=8820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:08.242 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:08.242 filename1: (groupid=0, jobs=1): err= 0: pid=86524: Thu Nov 7 13:52:15 2024 00:47:08.242 read: IOPS=1775, BW=13.9MiB/s (14.5MB/s)(69.4MiB/5005msec) 00:47:08.242 slat (nsec): min=5933, max=51258, avg=9535.35, stdev=3068.13 00:47:08.242 clat (usec): min=1280, max=10437, avg=4479.90, stdev=526.14 00:47:08.242 lat (usec): min=1292, max=10476, avg=4489.43, stdev=525.87 00:47:08.242 clat percentiles (usec): 00:47:08.242 | 1.00th=[ 3392], 5.00th=[ 3818], 10.00th=[ 4015], 20.00th=[ 4178], 00:47:08.242 | 30.00th=[ 4228], 
40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:47:08.242 | 70.00th=[ 4621], 80.00th=[ 4883], 90.00th=[ 5080], 95.00th=[ 5145], 00:47:08.242 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 7111], 99.95th=[10159], 00:47:08.242 | 99.99th=[10421] 00:47:08.242 bw ( KiB/s): min=13712, max=14896, per=23.76%, avg=14201.60, stdev=315.09, samples=10 00:47:08.242 iops : min= 1714, max= 1862, avg=1775.20, stdev=39.39, samples=10 00:47:08.242 lat (msec) : 2=0.02%, 4=8.30%, 10=91.59%, 20=0.09% 00:47:08.242 cpu : usr=96.54%, sys=3.04%, ctx=78, majf=0, minf=1637 00:47:08.242 IO depths : 1=0.1%, 2=0.6%, 4=71.0%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:08.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.242 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.242 issued rwts: total=8884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:08.242 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:08.242 filename1: (groupid=0, jobs=1): err= 0: pid=86525: Thu Nov 7 13:52:15 2024 00:47:08.242 read: IOPS=2243, BW=17.5MiB/s (18.4MB/s)(87.6MiB/5001msec) 00:47:08.242 slat (nsec): min=5889, max=46101, avg=9204.18, stdev=2913.82 00:47:08.242 clat (usec): min=1318, max=6089, avg=3536.24, stdev=503.05 00:47:08.242 lat (usec): min=1327, max=6098, avg=3545.44, stdev=503.16 00:47:08.242 clat percentiles (usec): 00:47:08.242 | 1.00th=[ 2278], 5.00th=[ 2802], 10.00th=[ 3064], 20.00th=[ 3163], 00:47:08.242 | 30.00th=[ 3326], 40.00th=[ 3392], 50.00th=[ 3458], 60.00th=[ 3589], 00:47:08.242 | 70.00th=[ 3654], 80.00th=[ 3884], 90.00th=[ 4228], 95.00th=[ 4293], 00:47:08.242 | 99.00th=[ 5276], 99.50th=[ 5473], 99.90th=[ 5669], 99.95th=[ 5735], 00:47:08.242 | 99.99th=[ 6063] 00:47:08.242 bw ( KiB/s): min=16593, max=18592, per=30.02%, avg=17944.10, stdev=609.17, samples=10 00:47:08.242 iops : min= 2074, max= 2324, avg=2243.00, stdev=76.18, samples=10 00:47:08.242 lat (msec) : 2=0.04%, 4=82.65%, 10=17.31% 00:47:08.242 cpu : usr=96.50%, sys=3.18%, ctx=6, majf=0, minf=1636 00:47:08.242 IO depths : 1=0.1%, 2=13.1%, 4=57.6%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:08.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.242 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.242 issued rwts: total=11218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:08.242 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:08.242 00:47:08.242 Run status group 0 (all jobs): 00:47:08.242 READ: bw=58.4MiB/s (61.2MB/s), 13.2MiB/s-17.5MiB/s (13.9MB/s-18.4MB/s), io=292MiB (306MB), run=5001-5005msec 00:47:08.504 ----------------------------------------------------- 00:47:08.504 Suppressions used: 00:47:08.504 count bytes template 00:47:08.504 6 52 /usr/src/fio/parse.c 00:47:08.504 1 8 libtcmalloc_minimal.so 00:47:08.504 1 904 libcrypto.so 00:47:08.504 ----------------------------------------------------- 00:47:08.504 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:08.504 
13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:08.504 00:47:08.504 real 0m27.904s 00:47:08.504 user 5m18.663s 00:47:08.504 sys 0m6.027s 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:08.504 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:08.504 ************************************ 00:47:08.504 END TEST fio_dif_rand_params 00:47:08.504 ************************************ 00:47:08.504 13:52:16 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:47:08.504 13:52:16 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:47:08.504 13:52:16 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:47:08.504 13:52:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:08.504 ************************************ 00:47:08.504 START TEST fio_dif_digest 00:47:08.504 ************************************ 00:47:08.504 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:47:08.504 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:47:08.504 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:47:08.504 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:47:08.504 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:47:08.504 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:47:08.504 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:47:08.504 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:47:08.504 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 
00:47:08.504 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:47:08.504 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:47:08.504 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:08.505 bdev_null0 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:08.505 [2024-11-07 13:52:16.455508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:47:08.505 { 00:47:08.505 "params": { 00:47:08.505 "name": "Nvme$subsystem", 00:47:08.505 "trtype": "$TEST_TRANSPORT", 00:47:08.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:08.505 "adrfam": "ipv4", 00:47:08.505 "trsvcid": "$NVMF_PORT", 
00:47:08.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:08.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:08.505 "hdgst": ${hdgst:-false}, 00:47:08.505 "ddgst": ${ddgst:-false} 00:47:08.505 }, 00:47:08.505 "method": "bdev_nvme_attach_controller" 00:47:08.505 } 00:47:08.505 EOF 00:47:08.505 )") 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:47:08.505 "params": { 00:47:08.505 "name": "Nvme0", 00:47:08.505 "trtype": "tcp", 00:47:08.505 "traddr": "10.0.0.2", 00:47:08.505 "adrfam": "ipv4", 00:47:08.505 "trsvcid": "4420", 00:47:08.505 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:08.505 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:08.505 "hdgst": true, 00:47:08.505 "ddgst": true 00:47:08.505 }, 00:47:08.505 "method": "bdev_nvme_attach_controller" 00:47:08.505 }' 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # break 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:47:08.505 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:09.093 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:47:09.093 ... 00:47:09.093 fio-3.35 00:47:09.093 Starting 3 threads 00:47:21.319 00:47:21.319 filename0: (groupid=0, jobs=1): err= 0: pid=88026: Thu Nov 7 13:52:27 2024 00:47:21.319 read: IOPS=145, BW=18.1MiB/s (19.0MB/s)(182MiB/10037msec) 00:47:21.319 slat (nsec): min=6569, max=45074, avg=10811.19, stdev=2077.41 00:47:21.319 clat (usec): min=8740, max=97711, avg=20651.35, stdev=13444.76 00:47:21.319 lat (usec): min=8752, max=97721, avg=20662.16, stdev=13444.59 00:47:21.319 clat percentiles (usec): 00:47:21.319 | 1.00th=[10814], 5.00th=[14091], 10.00th=[14615], 20.00th=[15139], 00:47:21.319 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16057], 60.00th=[16450], 00:47:21.319 | 70.00th=[16909], 80.00th=[17433], 90.00th=[54789], 95.00th=[56361], 00:47:21.319 | 99.00th=[58459], 99.50th=[58983], 99.90th=[96994], 99.95th=[98042], 00:47:21.319 | 99.99th=[98042] 00:47:21.319 bw ( KiB/s): min=14336, max=21760, per=25.56%, avg=18611.20, stdev=2059.92, samples=20 00:47:21.319 iops : min= 112, max= 170, avg=145.40, stdev=16.09, samples=20 00:47:21.319 lat (msec) : 10=0.21%, 20=88.33%, 50=0.14%, 100=11.32% 00:47:21.319 cpu : usr=95.08%, sys=4.49%, ctx=281, majf=0, minf=1633 00:47:21.319 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:21.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:21.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:21.319 issued rwts: total=1457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:21.319 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:21.319 filename0: (groupid=0, jobs=1): err= 0: pid=88028: Thu Nov 7 13:52:27 2024 00:47:21.319 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(263MiB/10044msec) 00:47:21.319 slat (nsec): min=6391, max=46770, avg=10696.63, stdev=1830.29 00:47:21.319 clat (usec): min=8878, max=55484, avg=14280.45, stdev=2816.40 00:47:21.319 lat (usec): min=8895, max=55494, avg=14291.15, stdev=2816.42 00:47:21.319 clat percentiles (usec): 00:47:21.319 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10945], 20.00th=[11863], 00:47:21.319 | 30.00th=[13435], 40.00th=[14353], 
50.00th=[14877], 60.00th=[15270], 00:47:21.319 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:47:21.319 | 99.00th=[17957], 99.50th=[18482], 99.90th=[54264], 99.95th=[54789], 00:47:21.319 | 99.99th=[55313] 00:47:21.319 bw ( KiB/s): min=24320, max=28672, per=36.97%, avg=26918.40, stdev=1064.54, samples=20 00:47:21.319 iops : min= 190, max= 224, avg=210.30, stdev= 8.32, samples=20 00:47:21.320 lat (msec) : 10=2.00%, 20=97.77%, 50=0.05%, 100=0.19% 00:47:21.320 cpu : usr=94.93%, sys=4.79%, ctx=18, majf=0, minf=1637 00:47:21.320 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:21.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:21.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:21.320 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:21.320 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:21.320 filename0: (groupid=0, jobs=1): err= 0: pid=88029: Thu Nov 7 13:52:27 2024 00:47:21.320 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(269MiB/10045msec) 00:47:21.320 slat (nsec): min=6479, max=62917, avg=9717.25, stdev=1372.74 00:47:21.320 clat (usec): min=8756, max=58415, avg=13970.74, stdev=2832.62 00:47:21.320 lat (usec): min=8766, max=58478, avg=13980.45, stdev=2833.11 00:47:21.320 clat percentiles (usec): 00:47:21.320 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10814], 20.00th=[11469], 00:47:21.320 | 30.00th=[12911], 40.00th=[13960], 50.00th=[14484], 60.00th=[14877], 00:47:21.320 | 70.00th=[15270], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:47:21.320 | 99.00th=[17433], 99.50th=[17957], 99.90th=[57934], 99.95th=[57934], 00:47:21.320 | 99.99th=[58459] 00:47:21.320 bw ( KiB/s): min=24320, max=29440, per=37.80%, avg=27520.00, stdev=1164.29, samples=20 00:47:21.320 iops : min= 190, max= 230, avg=215.00, stdev= 9.10, samples=20 00:47:21.320 lat (msec) : 10=3.16%, 20=96.61%, 50=0.05%, 100=0.19% 00:47:21.320 cpu : usr=94.48%, sys=5.24%, ctx=16, majf=0, minf=1636 00:47:21.320 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:21.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:21.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:21.320 issued rwts: total=2152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:21.320 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:21.320 00:47:21.320 Run status group 0 (all jobs): 00:47:21.320 READ: bw=71.1MiB/s (74.6MB/s), 18.1MiB/s-26.8MiB/s (19.0MB/s-28.1MB/s), io=714MiB (749MB), run=10037-10045msec 00:47:21.320 ----------------------------------------------------- 00:47:21.320 Suppressions used: 00:47:21.320 count bytes template 00:47:21.320 5 44 /usr/src/fio/parse.c 00:47:21.320 1 8 libtcmalloc_minimal.so 00:47:21.320 1 904 libcrypto.so 00:47:21.320 ----------------------------------------------------- 00:47:21.320 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:21.320 00:47:21.320 real 0m12.311s 00:47:21.320 user 0m43.138s 00:47:21.320 sys 0m2.078s 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:21.320 13:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:21.320 ************************************ 00:47:21.320 END TEST fio_dif_digest 00:47:21.320 ************************************ 00:47:21.320 13:52:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:47:21.320 13:52:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:47:21.320 13:52:28 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:21.320 13:52:28 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:47:21.320 13:52:28 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:21.320 13:52:28 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:47:21.320 13:52:28 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:21.320 13:52:28 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:21.320 rmmod nvme_tcp 00:47:21.320 rmmod nvme_fabrics 00:47:21.320 rmmod nvme_keyring 00:47:21.320 13:52:28 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:21.320 13:52:28 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:47:21.320 13:52:28 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:47:21.320 13:52:28 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 76979 ']' 00:47:21.320 13:52:28 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 76979 00:47:21.320 13:52:28 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 76979 ']' 00:47:21.320 13:52:28 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 76979 00:47:21.320 13:52:28 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:47:21.320 13:52:28 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:47:21.320 13:52:28 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76979 00:47:21.320 13:52:28 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:47:21.320 13:52:28 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:47:21.320 13:52:28 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76979' 00:47:21.320 killing process with pid 76979 00:47:21.320 13:52:28 nvmf_dif -- common/autotest_common.sh@971 -- # kill 76979 00:47:21.320 13:52:28 nvmf_dif -- common/autotest_common.sh@976 -- # wait 76979 00:47:21.890 13:52:29 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:47:21.890 13:52:29 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:26.093 Waiting for block devices as requested 00:47:26.093 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:26.093 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:26.093 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:26.093 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:26.093 
0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:26.093 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:26.093 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:26.093 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:26.093 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:47:26.353 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:26.353 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:26.353 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:26.613 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:26.613 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:26.613 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:26.613 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:26.873 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:27.134 13:52:34 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:27.134 13:52:34 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:27.134 13:52:34 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:47:27.134 13:52:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:47:27.134 13:52:34 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:27.134 13:52:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:47:27.134 13:52:34 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:27.134 13:52:34 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:27.134 13:52:34 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:27.134 13:52:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:27.134 13:52:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:29.042 13:52:37 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:29.043 00:47:29.043 real 1m26.987s 00:47:29.043 user 8m9.124s 00:47:29.043 sys 0m24.806s 00:47:29.043 13:52:37 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:29.043 13:52:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:29.043 ************************************ 00:47:29.043 END TEST nvmf_dif 00:47:29.043 ************************************ 00:47:29.303 13:52:37 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:47:29.303 13:52:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:47:29.303 13:52:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:47:29.303 13:52:37 -- common/autotest_common.sh@10 -- # set +x 00:47:29.303 ************************************ 00:47:29.303 START TEST nvmf_abort_qd_sizes 00:47:29.303 ************************************ 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:47:29.303 * Looking for test storage... 
00:47:29.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:29.303 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:47:29.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:29.303 --rc genhtml_branch_coverage=1 00:47:29.304 --rc genhtml_function_coverage=1 00:47:29.304 --rc genhtml_legend=1 00:47:29.304 --rc geninfo_all_blocks=1 00:47:29.304 --rc geninfo_unexecuted_blocks=1 00:47:29.304 00:47:29.304 ' 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:47:29.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:29.304 --rc genhtml_branch_coverage=1 00:47:29.304 --rc genhtml_function_coverage=1 00:47:29.304 --rc genhtml_legend=1 00:47:29.304 --rc geninfo_all_blocks=1 00:47:29.304 --rc geninfo_unexecuted_blocks=1 00:47:29.304 00:47:29.304 ' 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:47:29.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:29.304 --rc genhtml_branch_coverage=1 00:47:29.304 --rc genhtml_function_coverage=1 00:47:29.304 --rc genhtml_legend=1 00:47:29.304 --rc geninfo_all_blocks=1 00:47:29.304 --rc geninfo_unexecuted_blocks=1 00:47:29.304 00:47:29.304 ' 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:47:29.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:29.304 --rc genhtml_branch_coverage=1 00:47:29.304 --rc genhtml_function_coverage=1 00:47:29.304 --rc genhtml_legend=1 00:47:29.304 --rc geninfo_all_blocks=1 00:47:29.304 --rc geninfo_unexecuted_blocks=1 00:47:29.304 00:47:29.304 ' 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:29.304 13:52:37 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:29.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:47:29.304 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:37.445 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:47:37.446 Found 0000:31:00.0 (0x8086 - 0x159b) 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:47:37.446 Found 0000:31:00.1 (0x8086 - 0x159b) 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:47:37.446 Found net devices under 0000:31:00.0: cvl_0_0 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
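
[annotation] The trace above is nvmf/common.sh enumerating the supported NICs: both E810 functions (0x8086:0x159b, bound to the ice driver) land in the e810 array, which becomes pci_devs, and each PCI function is then mapped to its kernel net device by globbing sysfs. A minimal sketch of that lookup, assuming the same two functions and the cvl_* names the ice driver assigned on this node:

    for pci in 0000:31:00.0 0000:31:00.1; do
        # each PCI network function exposes its netdev name under net/
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done
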
00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:47:37.446 Found net devices under 0000:31:00.1: cvl_0_1 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:37.446 13:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:37.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
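
[annotation] At this point nvmf_tcp_init has split the two ports across namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as the target (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables ACCEPT rule opens TCP/4420 before a ping in each direction proves reachability. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
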
00:47:37.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:47:37.446 00:47:37.446 --- 10.0.0.2 ping statistics --- 00:47:37.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:37.446 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:37.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:37.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:47:37.446 00:47:37.446 --- 10.0.0.1 ping statistics --- 00:47:37.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:37.446 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:47:37.446 13:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:47:40.744 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:47:40.744 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:47:40.744 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:47:40.744 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:47:40.744 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:47:40.744 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:47:40.744 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:47:41.004 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:47:41.004 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:47:41.004 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:47:41.004 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:47:41.004 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:47:41.004 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:47:41.004 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:47:41.004 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:47:41.004 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:47:41.004 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:47:41.265 13:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:41.265 13:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:41.265 13:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:41.265 13:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:41.265 13:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:41.265 13:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:41.526 13:52:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:47:41.526 13:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:41.526 13:52:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:41.526 13:52:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:41.526 13:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=98527 00:47:41.526 13:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 98527 00:47:41.526 13:52:49 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:47:41.526 13:52:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 98527 ']' 00:47:41.526 13:52:49 
nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:41.526 13:52:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:47:41.526 13:52:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:41.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:41.526 13:52:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:47:41.526 13:52:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:41.526 [2024-11-07 13:52:49.375468] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:47:41.526 [2024-11-07 13:52:49.375581] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:41.786 [2024-11-07 13:52:49.538392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:41.786 [2024-11-07 13:52:49.639121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:41.786 [2024-11-07 13:52:49.639163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:41.786 [2024-11-07 13:52:49.639178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:41.786 [2024-11-07 13:52:49.639190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:41.786 [2024-11-07 13:52:49.639199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
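
[annotation] The EAL parameter dump above and the reactor notices just below are nvmf_tgt coming up on four cores (-m 0xf) inside the target namespace. nvmfappstart reduces to roughly the following, with waitforlisten being the harness helper (seen in the trace as "waitforlisten 98527") that polls the UNIX domain RPC socket until the app answers:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # blocks until /var/tmp/spdk.sock accepts RPCs
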
00:47:41.786 [2024-11-07 13:52:49.641451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:41.786 [2024-11-07 13:52:49.641537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:41.786 [2024-11-07 13:52:49.641655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:41.786 [2024-11-07 13:52:49.641679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:47:42.357 13:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:47:42.358 13:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:42.358 ************************************ 00:47:42.358 START TEST spdk_target_abort 00:47:42.358 ************************************ 00:47:42.358 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:47:42.358 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:47:42.358 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b 
spdk_target 00:47:42.358 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:42.358 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.619 spdk_targetn1 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.619 [2024-11-07 13:52:50.561766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.619 [2024-11-07 13:52:50.611848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 
-- # local target r 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:42.619 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:42.880 [2024-11-07 13:52:50.794872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:32 len:8 PRP1 0x200004abf000 PRP2 0x0 00:47:42.880 [2024-11-07 13:52:50.794907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0006 p:1 m:0 dnr:0 00:47:42.880 [2024-11-07 13:52:50.803000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:280 len:8 PRP1 0x200004ac3000 PRP2 0x0 00:47:42.880 [2024-11-07 13:52:50.803024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0025 p:1 m:0 dnr:0 00:47:42.880 [2024-11-07 13:52:50.810375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:504 len:8 PRP1 0x200004ac3000 PRP2 0x0 00:47:42.880 [2024-11-07 13:52:50.810397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0041 p:1 m:0 dnr:0 00:47:42.880 [2024-11-07 13:52:50.827314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1128 len:8 PRP1 0x200004ac1000 PRP2 0x0 00:47:42.880 [2024-11-07 13:52:50.827336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:008f p:1 m:0 dnr:0 00:47:42.880 [2024-11-07 13:52:50.853422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1944 len:8 PRP1 0x200004abf000 PRP2 0x0 00:47:42.880 [2024-11-07 13:52:50.853444] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f4 p:1 m:0 dnr:0 00:47:42.880 [2024-11-07 13:52:50.853617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1952 len:8 PRP1 0x200004abd000 PRP2 0x0 00:47:42.880 [2024-11-07 13:52:50.853631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f5 p:1 m:0 dnr:0 00:47:46.178 Initializing NVMe Controllers 00:47:46.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:46.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:46.178 Initialization complete. Launching workers. 00:47:46.178 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13328, failed: 6 00:47:46.178 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2644, failed to submit 10690 00:47:46.178 success 723, unsuccessful 1921, failed 0 00:47:46.178 13:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:46.178 13:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:46.438 [2024-11-07 13:52:54.221343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:488 len:8 PRP1 0x200004e4f000 PRP2 0x0 00:47:46.438 [2024-11-07 13:52:54.221394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:47:46.438 [2024-11-07 13:52:54.276126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:1760 len:8 PRP1 0x200004e5b000 PRP2 0x0 00:47:46.438 [2024-11-07 13:52:54.276164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:00e4 p:1 m:0 dnr:0 00:47:46.438 [2024-11-07 13:52:54.316171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:2640 len:8 PRP1 0x200004e57000 PRP2 0x0 00:47:46.438 [2024-11-07 13:52:54.316205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:47:46.438 [2024-11-07 13:52:54.356242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:3616 len:8 PRP1 0x200004e43000 PRP2 0x0 00:47:46.438 [2024-11-07 13:52:54.356279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:00c6 p:0 m:0 dnr:0 00:47:46.438 [2024-11-07 13:52:54.372183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:3960 len:8 PRP1 0x200004e5b000 PRP2 0x0 00:47:46.438 [2024-11-07 13:52:54.372213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00fa p:0 m:0 dnr:0 00:47:47.821 [2024-11-07 13:52:55.583231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:31880 len:8 PRP1 0x200004e5d000 PRP2 0x0 00:47:47.821 [2024-11-07 13:52:55.583277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:0093 p:0 m:0 dnr:0 00:47:49.735 Initializing NVMe Controllers 
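
[annotation] Each iteration of the qds loop replays the same 50/50 read-write workload against the listener with a deeper abort queue; the three invocations in this test reduce to:

    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

Reading the summary that follows: the 6 I/Os counted as "failed" line up with the six ABORTED - BY REQUEST completions printed above, "abort submitted" vs. "failed to submit" splits abort commands that fit in the -q 4 queue from those that could not be queued, and success/unsuccessful counts aborts that did vs. did not catch their target I/O in flight.
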
00:47:49.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:49.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:49.735 Initialization complete. Launching workers. 00:47:49.735 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8531, failed: 6 00:47:49.735 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 7296 00:47:49.735 success 327, unsuccessful 914, failed 0 00:47:49.735 13:52:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:49.735 13:52:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:49.995 [2024-11-07 13:52:57.747367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:169 nsid:1 lba:2152 len:8 PRP1 0x200004adb000 PRP2 0x0 00:47:49.995 [2024-11-07 13:52:57.747404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:169 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:47:53.292 Initializing NVMe Controllers 00:47:53.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:53.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:53.292 Initialization complete. Launching workers. 00:47:53.292 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38012, failed: 1 00:47:53.292 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2622, failed to submit 35391 00:47:53.292 success 598, unsuccessful 2024, failed 0 00:47:53.292 13:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:47:53.292 13:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:53.292 13:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:53.292 13:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:53.292 13:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:47:53.292 13:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:53.292 13:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:54.675 13:53:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:54.675 13:53:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98527 00:47:54.675 13:53:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 98527 ']' 00:47:54.675 13:53:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 98527 00:47:54.675 13:53:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:47:54.675 13:53:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:47:54.675 13:53:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98527 00:47:54.940 13:53:02 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:47:54.940 13:53:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:47:54.940 13:53:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98527' 00:47:54.940 killing process with pid 98527 00:47:54.940 13:53:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 98527 00:47:54.940 13:53:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 98527 00:47:55.600 00:47:55.600 real 0m13.135s 00:47:55.600 user 0m52.213s 00:47:55.600 sys 0m2.171s 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:55.600 ************************************ 00:47:55.600 END TEST spdk_target_abort 00:47:55.600 ************************************ 00:47:55.600 13:53:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:47:55.600 13:53:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:47:55.600 13:53:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:47:55.600 13:53:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:55.600 ************************************ 00:47:55.600 START TEST kernel_target_abort 00:47:55.600 ************************************ 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:47:55.600 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:59.807 Waiting for block devices as requested 00:47:59.807 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:59.807 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:59.807 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:59.807 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:59.807 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:59.807 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:59.807 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:59.807 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:48:00.068 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:48:00.068 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:48:00.068 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:48:00.329 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:48:00.329 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:48:00.329 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:48:00.589 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:48:00.589 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:48:00.589 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:48:01.531 No valid GPT data, bailing 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 
1 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:48:01.531 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:48:01.531 00:48:01.531 Discovery Log Number of Records 2, Generation counter 2 00:48:01.531 =====Discovery Log Entry 0====== 00:48:01.531 trtype: tcp 00:48:01.531 adrfam: ipv4 00:48:01.531 subtype: current discovery subsystem 00:48:01.531 treq: not specified, sq flow control disable supported 00:48:01.531 portid: 1 00:48:01.531 trsvcid: 4420 00:48:01.531 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:48:01.531 traddr: 10.0.0.1 00:48:01.531 eflags: none 00:48:01.531 sectype: none 00:48:01.531 =====Discovery Log Entry 1====== 00:48:01.531 trtype: tcp 00:48:01.531 adrfam: ipv4 00:48:01.531 subtype: nvme subsystem 00:48:01.531 treq: not specified, sq flow control disable supported 00:48:01.531 portid: 1 00:48:01.531 trsvcid: 4420 00:48:01.532 subnqn: nqn.2016-06.io.spdk:testnqn 00:48:01.532 traddr: 10.0.0.1 00:48:01.532 eflags: none 00:48:01.532 sectype: none 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- 
# local subnqn=nqn.2016-06.io.spdk:testnqn 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:48:01.532 13:53:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:48:04.829 Initializing NVMe Controllers 00:48:04.829 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:48:04.829 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:48:04.829 Initialization complete. Launching workers. 00:48:04.829 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61161, failed: 0 00:48:04.829 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 61161, failed to submit 0 00:48:04.829 success 0, unsuccessful 61161, failed 0 00:48:04.829 13:53:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:48:04.829 13:53:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:48:08.129 Initializing NVMe Controllers 00:48:08.129 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:48:08.129 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:48:08.129 Initialization complete. Launching workers. 
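
[annotation] For the kernel_target_abort half, configure_kernel_target builds the equivalent subsystem out of the in-kernel nvmet driver instead of the SPDK app, after spdk-gpt.py confirms /dev/nvme0n1 is safe to claim ("No valid GPT data, bailing"). The bare mkdir/echo/ln -s calls in the trace are writes into nvmet's configfs tree; the trace elides the attribute paths, so the sketch below assumes the standard nvmet configfs layout rather than quoting common.sh verbatim:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$sub"
    mkdir "$sub/namespaces/1"
    mkdir /sys/kernel/config/nvmet/ports/1
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    ln -s "$sub" /sys/kernel/config/nvmet/ports/1/subsystems/

The nvme discover against 10.0.0.1:4420 then returns the two-record log shown above (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn), confirming the kernel target is listening before the abort passes run.
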
00:48:08.129 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97668, failed: 0 00:48:08.129 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24646, failed to submit 73022 00:48:08.129 success 0, unsuccessful 24646, failed 0 00:48:08.129 13:53:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:48:08.129 13:53:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:48:11.429 Initializing NVMe Controllers 00:48:11.429 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:48:11.429 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:48:11.429 Initialization complete. Launching workers. 00:48:11.429 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92345, failed: 0 00:48:11.429 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23066, failed to submit 69279 00:48:11.429 success 0, unsuccessful 23066, failed 0 00:48:11.429 13:53:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:48:11.429 13:53:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:48:11.429 13:53:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:48:11.429 13:53:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:48:11.429 13:53:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:48:11.429 13:53:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:48:11.429 13:53:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:48:11.429 13:53:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:48:11.429 13:53:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:48:11.429 13:53:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:48:15.635 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:48:15.635 0000:00:01.0 (8086 0b00): ioatdma -> 
vfio-pci 00:48:15.635 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:48:17.019 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:48:17.279 00:48:17.279 real 0m21.720s 00:48:17.279 user 0m10.482s 00:48:17.279 sys 0m7.139s 00:48:17.279 13:53:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:48:17.279 13:53:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:48:17.279 ************************************ 00:48:17.279 END TEST kernel_target_abort 00:48:17.279 ************************************ 00:48:17.279 13:53:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:48:17.279 13:53:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:48:17.279 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:17.279 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:48:17.279 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:17.279 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:48:17.279 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:17.280 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:17.280 rmmod nvme_tcp 00:48:17.280 rmmod nvme_fabrics 00:48:17.280 rmmod nvme_keyring 00:48:17.280 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:17.280 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:48:17.280 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:48:17.280 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 98527 ']' 00:48:17.280 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 98527 00:48:17.280 13:53:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 98527 ']' 00:48:17.280 13:53:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 98527 00:48:17.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (98527) - No such process 00:48:17.280 13:53:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 98527 is not found' 00:48:17.280 Process with pid 98527 is not found 00:48:17.280 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:48:17.280 13:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:48:21.482 Waiting for block devices as requested 00:48:21.482 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:48:21.482 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:48:21.482 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:48:21.482 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:48:21.482 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:48:21.482 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:48:21.742 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:48:21.742 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:48:21.742 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:48:22.002 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:48:22.002 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:48:22.002 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:48:22.262 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:48:22.262 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:48:22.262 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:48:22.262 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:48:22.522 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:48:22.783 13:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:22.783 13:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:22.783 13:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:48:22.783 13:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:48:22.783 13:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:22.783 13:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:48:22.783 13:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:22.783 13:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:48:22.783 13:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:22.783 13:53:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:48:22.783 13:53:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:25.324 13:53:32 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:48:25.324 00:48:25.324 real 0m55.648s 00:48:25.324 user 1m8.397s 00:48:25.324 sys 0m21.002s 00:48:25.324 13:53:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:48:25.324 13:53:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:48:25.324 ************************************ 00:48:25.324 END TEST nvmf_abort_qd_sizes 00:48:25.324 ************************************ 00:48:25.324 13:53:32 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:48:25.324 13:53:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:48:25.324 13:53:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:48:25.324 13:53:32 -- common/autotest_common.sh@10 -- # set +x 00:48:25.324 ************************************ 00:48:25.324 START TEST keyring_file 00:48:25.324 ************************************ 00:48:25.324 13:53:32 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:48:25.324 * Looking for test storage... 
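
[annotation] Before the keyring_file suite starts, nvmftestfini above unwinds the fixture: modprobe -r unloads nvme_tcp/nvme_fabrics/nvme_keyring, killprocess finds pid 98527 already gone (spdk_target_abort killed it earlier, hence "No such process"), setup.sh reset rebinds the devices, and iptr restores the firewall by filtering out every SPDK-tagged rule. Condensed, with the namespace removal inferred from the helper's name since its xtrace is redirected away:

    iptables-save | grep -v SPDK_NVMF | iptables-restore
    _remove_spdk_ns              # harness helper; presumably deletes cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
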
00:48:25.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:48:25.324 13:53:32 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:48:25.324 13:53:32 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:48:25.324 13:53:32 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:48:25.324 13:53:32 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@345 -- # : 1 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@353 -- # local d=1 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@355 -- # echo 1 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@353 -- # local d=2 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@355 -- # echo 2 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@368 -- # return 0 00:48:25.324 13:53:32 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:25.324 13:53:32 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:48:25.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:25.324 --rc genhtml_branch_coverage=1 00:48:25.324 --rc genhtml_function_coverage=1 00:48:25.324 --rc genhtml_legend=1 00:48:25.324 --rc geninfo_all_blocks=1 00:48:25.324 --rc geninfo_unexecuted_blocks=1 00:48:25.324 00:48:25.324 ' 00:48:25.324 13:53:32 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:48:25.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:25.324 --rc genhtml_branch_coverage=1 00:48:25.324 --rc genhtml_function_coverage=1 00:48:25.324 --rc genhtml_legend=1 00:48:25.324 --rc geninfo_all_blocks=1 
00:48:25.324 --rc geninfo_unexecuted_blocks=1 00:48:25.324 00:48:25.324 ' 00:48:25.324 13:53:32 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:48:25.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:25.324 --rc genhtml_branch_coverage=1 00:48:25.324 --rc genhtml_function_coverage=1 00:48:25.324 --rc genhtml_legend=1 00:48:25.324 --rc geninfo_all_blocks=1 00:48:25.324 --rc geninfo_unexecuted_blocks=1 00:48:25.324 00:48:25.324 ' 00:48:25.324 13:53:32 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:48:25.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:25.324 --rc genhtml_branch_coverage=1 00:48:25.324 --rc genhtml_function_coverage=1 00:48:25.324 --rc genhtml_legend=1 00:48:25.324 --rc geninfo_all_blocks=1 00:48:25.324 --rc geninfo_unexecuted_blocks=1 00:48:25.324 00:48:25.324 ' 00:48:25.324 13:53:32 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:48:25.324 13:53:32 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:25.324 13:53:32 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:25.324 13:53:32 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:25.324 13:53:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:25.324 13:53:32 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:25.324 13:53:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:48:25.324 13:53:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@51 -- # : 0 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:25.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:25.324 13:53:32 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:25.325 13:53:32 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:25.325 13:53:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:48:25.325 13:53:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:48:25.325 13:53:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:48:25.325 13:53:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:48:25.325 13:53:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:48:25.325 13:53:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:48:25.325 13:53:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:48:25.325 13:53:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:48:25.325 13:53:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:48:25.325 13:53:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:48:25.325 13:53:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:48:25.325 13:53:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:48:25.325 13:53:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2K875NTGnj 00:48:25.325 13:53:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:48:25.325 13:53:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:48:25.325 13:53:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:48:25.325 13:53:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:25.325 13:53:32 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:48:25.325 13:53:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:48:25.325 13:53:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:48:25.325 13:53:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2K875NTGnj 00:48:25.325 13:53:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2K875NTGnj 00:48:25.325 13:53:33 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.2K875NTGnj 00:48:25.325 13:53:33 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:48:25.325 13:53:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:48:25.325 13:53:33 keyring_file -- keyring/common.sh@17 -- # name=key1 00:48:25.325 13:53:33 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:48:25.325 13:53:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:48:25.325 13:53:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:48:25.325 13:53:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fcjmFYL4tH 00:48:25.325 13:53:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:48:25.325 13:53:33 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:48:25.325 13:53:33 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:48:25.325 13:53:33 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:25.325 13:53:33 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:48:25.325 13:53:33 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:48:25.325 13:53:33 keyring_file -- nvmf/common.sh@733 -- # python - 00:48:25.325 13:53:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fcjmFYL4tH 00:48:25.325 13:53:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fcjmFYL4tH 00:48:25.325 13:53:33 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.fcjmFYL4tH 00:48:25.325 13:53:33 keyring_file -- keyring/file.sh@30 -- # tgtpid=109521 00:48:25.325 13:53:33 keyring_file -- keyring/file.sh@32 -- # waitforlisten 109521 00:48:25.325 13:53:33 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:48:25.325 13:53:33 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 109521 ']' 00:48:25.325 13:53:33 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:25.325 13:53:33 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:25.325 13:53:33 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:25.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:25.325 13:53:33 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:25.325 13:53:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:25.325 [2024-11-07 13:53:33.183701] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:48:25.325 [2024-11-07 13:53:33.183821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109521 ] 00:48:25.325 [2024-11-07 13:53:33.321588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:25.585 [2024-11-07 13:53:33.417951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:48:26.155 13:53:34 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:26.155 [2024-11-07 13:53:34.065869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:26.155 null0 00:48:26.155 [2024-11-07 13:53:34.097910] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:26.155 [2024-11-07 13:53:34.098358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:26.155 13:53:34 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:26.155 [2024-11-07 13:53:34.129961] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:48:26.155 request: 00:48:26.155 { 00:48:26.155 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:48:26.155 "secure_channel": false, 00:48:26.155 "listen_address": { 00:48:26.155 "trtype": "tcp", 00:48:26.155 "traddr": "127.0.0.1", 00:48:26.155 "trsvcid": "4420" 00:48:26.155 }, 00:48:26.155 "method": "nvmf_subsystem_add_listener", 00:48:26.155 "req_id": 1 00:48:26.155 } 00:48:26.155 Got JSON-RPC error response 00:48:26.155 response: 00:48:26.155 { 00:48:26.155 "code": 
-32602, 00:48:26.155 "message": "Invalid parameters" 00:48:26.155 } 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:26.155 13:53:34 keyring_file -- keyring/file.sh@47 -- # bperfpid=109679 00:48:26.155 13:53:34 keyring_file -- keyring/file.sh@49 -- # waitforlisten 109679 /var/tmp/bperf.sock 00:48:26.155 13:53:34 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 109679 ']' 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:26.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:26.155 13:53:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:26.415 [2024-11-07 13:53:34.218053] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:48:26.415 [2024-11-07 13:53:34.218165] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109679 ] 00:48:26.415 [2024-11-07 13:53:34.371089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:26.675 [2024-11-07 13:53:34.468375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:27.245 13:53:34 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:27.245 13:53:34 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:48:27.245 13:53:34 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2K875NTGnj 00:48:27.245 13:53:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2K875NTGnj 00:48:27.245 13:53:35 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.fcjmFYL4tH 00:48:27.246 13:53:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.fcjmFYL4tH 00:48:27.506 13:53:35 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:48:27.506 13:53:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:48:27.506 13:53:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:27.506 13:53:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:27.506 13:53:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:27.506 
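Both PSK files registered with bperf above were generated earlier by nvmf/common.sh's format_interchange_psk ("python -" in the trace). A minimal sketch of that encoding, assuming the standard NVMe TLS PSK interchange layout — prefix, two-hex-digit hash indicator, then base64 of the key bytes with their little-endian CRC32 appended; the body is reconstructed from the traced variable names (prefix/key/digest), not copied from the script:

# Sketch: emit the NVMeTLSkey-1 interchange string for a configured PSK.
# digest 0 means "no PSK hash", matching digest=0 in the trace above.
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
}
format_interchange_psk 00112233445566778899aabbccddeeff 0

Writing that string to a mktemp file, chmod 0600, and keyring_file_add_key is exactly the prep_key sequence traced earlier; the 0660 variant attempted later in the run is rejected by keyring's permission check.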
13:53:35 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.2K875NTGnj == \/\t\m\p\/\t\m\p\.\2\K\8\7\5\N\T\G\n\j ]] 00:48:27.506 13:53:35 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:48:27.506 13:53:35 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:48:27.506 13:53:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:27.506 13:53:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:27.506 13:53:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:27.766 13:53:35 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.fcjmFYL4tH == \/\t\m\p\/\t\m\p\.\f\c\j\m\F\Y\L\4\t\H ]] 00:48:27.766 13:53:35 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:48:27.766 13:53:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:27.766 13:53:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:27.766 13:53:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:27.766 13:53:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:27.766 13:53:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:28.027 13:53:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:48:28.027 13:53:35 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:48:28.027 13:53:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:28.027 13:53:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:28.027 13:53:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:28.027 13:53:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:28.027 13:53:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:28.027 13:53:36 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:48:28.027 13:53:36 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:28.027 13:53:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:28.289 [2024-11-07 13:53:36.162796] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:28.289 nvme0n1 00:48:28.289 13:53:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:48:28.289 13:53:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:28.289 13:53:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:28.289 13:53:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:28.289 13:53:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:28.289 13:53:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:28.549 13:53:36 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:48:28.549 13:53:36 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:48:28.549 13:53:36 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:48:28.549 13:53:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:28.549 13:53:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:28.549 13:53:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:28.549 13:53:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:28.809 13:53:36 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:48:28.809 13:53:36 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:28.809 Running I/O for 1 seconds... 00:48:29.749 13798.00 IOPS, 53.90 MiB/s 00:48:29.749 Latency(us) 00:48:29.749 [2024-11-07T12:53:37.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:29.749 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:48:29.749 nvme0n1 : 1.01 13806.75 53.93 0.00 0.00 9227.99 5160.96 17039.36 00:48:29.749 [2024-11-07T12:53:37.756Z] =================================================================================================================== 00:48:29.749 [2024-11-07T12:53:37.756Z] Total : 13806.75 53.93 0.00 0.00 9227.99 5160.96 17039.36 00:48:29.749 { 00:48:29.749 "results": [ 00:48:29.749 { 00:48:29.749 "job": "nvme0n1", 00:48:29.749 "core_mask": "0x2", 00:48:29.749 "workload": "randrw", 00:48:29.749 "percentage": 50, 00:48:29.749 "status": "finished", 00:48:29.749 "queue_depth": 128, 00:48:29.749 "io_size": 4096, 00:48:29.749 "runtime": 1.008637, 00:48:29.749 "iops": 13806.751090828515, 00:48:29.749 "mibps": 53.932621448548886, 00:48:29.749 "io_failed": 0, 00:48:29.749 "io_timeout": 0, 00:48:29.749 "avg_latency_us": 9227.985638374264, 00:48:29.749 "min_latency_us": 5160.96, 00:48:29.749 "max_latency_us": 17039.36 00:48:29.749 } 00:48:29.749 ], 00:48:29.749 "core_count": 1 00:48:29.749 } 00:48:29.749 13:53:37 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:29.749 13:53:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:48:30.010 13:53:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:48:30.010 13:53:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:30.010 13:53:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:30.010 13:53:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:30.010 13:53:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:30.010 13:53:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:30.270 13:53:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:48:30.270 13:53:38 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:48:30.270 13:53:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:30.270 13:53:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:30.270 13:53:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:30.270 13:53:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:30.270 13:53:38 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:30.531 13:53:38 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:48:30.531 13:53:38 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:30.531 13:53:38 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:48:30.531 13:53:38 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:30.531 13:53:38 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:48:30.531 13:53:38 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:30.531 13:53:38 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:48:30.531 13:53:38 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:30.531 13:53:38 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:30.531 13:53:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:30.531 [2024-11-07 13:53:38.443170] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:48:30.531 [2024-11-07 13:53:38.443534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041be80 (107): Transport endpoint is not connected 00:48:30.531 [2024-11-07 13:53:38.444517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041be80 (9): Bad file descriptor 00:48:30.531 [2024-11-07 13:53:38.445515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:48:30.531 [2024-11-07 13:53:38.445531] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:48:30.531 [2024-11-07 13:53:38.445541] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:48:30.531 [2024-11-07 13:53:38.445552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
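The attach with key1 above is expected to fail: key1 does not match the PSK the target was set up with, so the TLS handshake collapses and the socket drops (errno 107 in the trace), leaving the controller in a failed state; the JSON-RPC request/response dump for that failure follows below. Done by hand, the NOT wrapper's assertion is equivalent to this sketch, using the exact flags from the trace (rpc.py path abbreviated):

if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    # A mismatched PSK must never authenticate; success here is a test failure.
    echo "unexpected success attaching with key1" >&2
    exit 1
fi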
00:48:30.531 request: 00:48:30.531 { 00:48:30.531 "name": "nvme0", 00:48:30.531 "trtype": "tcp", 00:48:30.531 "traddr": "127.0.0.1", 00:48:30.531 "adrfam": "ipv4", 00:48:30.531 "trsvcid": "4420", 00:48:30.531 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:30.531 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:30.531 "prchk_reftag": false, 00:48:30.531 "prchk_guard": false, 00:48:30.531 "hdgst": false, 00:48:30.531 "ddgst": false, 00:48:30.531 "psk": "key1", 00:48:30.531 "allow_unrecognized_csi": false, 00:48:30.531 "method": "bdev_nvme_attach_controller", 00:48:30.531 "req_id": 1 00:48:30.531 } 00:48:30.531 Got JSON-RPC error response 00:48:30.531 response: 00:48:30.531 { 00:48:30.531 "code": -5, 00:48:30.531 "message": "Input/output error" 00:48:30.531 } 00:48:30.531 13:53:38 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:48:30.531 13:53:38 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:30.531 13:53:38 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:30.531 13:53:38 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:30.531 13:53:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:48:30.531 13:53:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:30.531 13:53:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:30.531 13:53:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:30.531 13:53:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:30.531 13:53:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:30.791 13:53:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:48:30.791 13:53:38 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:48:30.791 13:53:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:30.791 13:53:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:30.791 13:53:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:30.791 13:53:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:30.791 13:53:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:31.050 13:53:38 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:48:31.050 13:53:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:48:31.050 13:53:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:48:31.050 13:53:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:48:31.050 13:53:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:48:31.310 13:53:39 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:48:31.310 13:53:39 keyring_file -- keyring/file.sh@78 -- # jq length 00:48:31.310 13:53:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:31.570 13:53:39 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:48:31.570 13:53:39 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.2K875NTGnj 00:48:31.570 13:53:39 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.2K875NTGnj 00:48:31.570 13:53:39 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:48:31.570 13:53:39 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.2K875NTGnj 00:48:31.570 13:53:39 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:48:31.570 13:53:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:31.570 13:53:39 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:48:31.570 13:53:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:31.570 13:53:39 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2K875NTGnj 00:48:31.570 13:53:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2K875NTGnj 00:48:31.570 [2024-11-07 13:53:39.507831] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2K875NTGnj': 0100660 00:48:31.570 [2024-11-07 13:53:39.507861] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:48:31.570 request: 00:48:31.570 { 00:48:31.570 "name": "key0", 00:48:31.570 "path": "/tmp/tmp.2K875NTGnj", 00:48:31.570 "method": "keyring_file_add_key", 00:48:31.570 "req_id": 1 00:48:31.570 } 00:48:31.570 Got JSON-RPC error response 00:48:31.570 response: 00:48:31.570 { 00:48:31.570 "code": -1, 00:48:31.570 "message": "Operation not permitted" 00:48:31.570 } 00:48:31.570 13:53:39 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:48:31.570 13:53:39 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:31.570 13:53:39 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:31.570 13:53:39 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:31.570 13:53:39 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.2K875NTGnj 00:48:31.570 13:53:39 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2K875NTGnj 00:48:31.570 13:53:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2K875NTGnj 00:48:31.834 13:53:39 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.2K875NTGnj 00:48:31.834 13:53:39 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:48:31.834 13:53:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:31.834 13:53:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:31.834 13:53:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:31.834 13:53:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:31.834 13:53:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:32.153 13:53:39 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:48:32.153 13:53:39 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:32.153 13:53:39 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:48:32.153 13:53:39 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:32.153 13:53:39 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:48:32.153 13:53:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:32.153 13:53:39 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:48:32.153 13:53:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:32.153 13:53:39 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:32.153 13:53:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:32.153 [2024-11-07 13:53:40.017193] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.2K875NTGnj': No such file or directory 00:48:32.153 [2024-11-07 13:53:40.017223] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:48:32.153 [2024-11-07 13:53:40.017241] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:48:32.153 [2024-11-07 13:53:40.017251] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:48:32.153 [2024-11-07 13:53:40.017260] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:48:32.153 [2024-11-07 13:53:40.017268] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:48:32.153 request: 00:48:32.153 { 00:48:32.153 "name": "nvme0", 00:48:32.153 "trtype": "tcp", 00:48:32.153 "traddr": "127.0.0.1", 00:48:32.153 "adrfam": "ipv4", 00:48:32.153 "trsvcid": "4420", 00:48:32.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:32.153 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:32.153 "prchk_reftag": false, 00:48:32.153 "prchk_guard": false, 00:48:32.153 "hdgst": false, 00:48:32.153 "ddgst": false, 00:48:32.153 "psk": "key0", 00:48:32.153 "allow_unrecognized_csi": false, 00:48:32.153 "method": "bdev_nvme_attach_controller", 00:48:32.153 "req_id": 1 00:48:32.153 } 00:48:32.153 Got JSON-RPC error response 00:48:32.153 response: 00:48:32.153 { 00:48:32.153 "code": -19, 00:48:32.153 "message": "No such device" 00:48:32.153 } 00:48:32.154 13:53:40 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:48:32.154 13:53:40 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:32.154 13:53:40 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:32.154 13:53:40 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:32.154 13:53:40 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:48:32.154 13:53:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:48:32.415 13:53:40 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:48:32.415 13:53:40 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:48:32.416 13:53:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:48:32.416 13:53:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:48:32.416 13:53:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:48:32.416 13:53:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:48:32.416 13:53:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MNSAv0ZunF 00:48:32.416 13:53:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:48:32.416 13:53:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:48:32.416 13:53:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:48:32.416 13:53:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:32.416 13:53:40 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:48:32.416 13:53:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:48:32.416 13:53:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:48:32.416 13:53:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MNSAv0ZunF 00:48:32.416 13:53:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MNSAv0ZunF 00:48:32.416 13:53:40 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.MNSAv0ZunF 00:48:32.416 13:53:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MNSAv0ZunF 00:48:32.416 13:53:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MNSAv0ZunF 00:48:32.416 13:53:40 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:32.416 13:53:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:32.675 nvme0n1 00:48:32.675 13:53:40 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:48:32.675 13:53:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:32.675 13:53:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:32.675 13:53:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:32.675 13:53:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:32.675 13:53:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:32.935 13:53:40 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:48:32.935 13:53:40 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:48:32.935 13:53:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:48:33.195 13:53:40 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:48:33.195 13:53:40 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:48:33.195 13:53:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:33.195 13:53:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:33.195 13:53:40 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:33.195 13:53:41 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:48:33.195 13:53:41 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:48:33.195 13:53:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:33.195 13:53:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:33.195 13:53:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:33.195 13:53:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:33.195 13:53:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:33.456 13:53:41 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:48:33.456 13:53:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:33.456 13:53:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:48:33.717 13:53:41 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:48:33.717 13:53:41 keyring_file -- keyring/file.sh@105 -- # jq length 00:48:33.717 13:53:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:33.717 13:53:41 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:48:33.717 13:53:41 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MNSAv0ZunF 00:48:33.717 13:53:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MNSAv0ZunF 00:48:33.978 13:53:41 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.fcjmFYL4tH 00:48:33.978 13:53:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.fcjmFYL4tH 00:48:34.238 13:53:42 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:34.238 13:53:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:34.498 nvme0n1 00:48:34.498 13:53:42 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:48:34.498 13:53:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:48:34.759 13:53:42 keyring_file -- keyring/file.sh@113 -- # config='{ 00:48:34.759 "subsystems": [ 00:48:34.759 { 00:48:34.759 "subsystem": "keyring", 00:48:34.759 "config": [ 00:48:34.759 { 00:48:34.759 "method": "keyring_file_add_key", 00:48:34.759 "params": { 00:48:34.759 "name": "key0", 00:48:34.759 "path": "/tmp/tmp.MNSAv0ZunF" 00:48:34.759 } 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "method": "keyring_file_add_key", 00:48:34.759 "params": { 00:48:34.759 "name": "key1", 00:48:34.759 "path": "/tmp/tmp.fcjmFYL4tH" 00:48:34.759 } 00:48:34.759 } 00:48:34.759 ] 00:48:34.759 
}, 00:48:34.759 { 00:48:34.759 "subsystem": "iobuf", 00:48:34.759 "config": [ 00:48:34.759 { 00:48:34.759 "method": "iobuf_set_options", 00:48:34.759 "params": { 00:48:34.759 "small_pool_count": 8192, 00:48:34.759 "large_pool_count": 1024, 00:48:34.759 "small_bufsize": 8192, 00:48:34.759 "large_bufsize": 135168, 00:48:34.759 "enable_numa": false 00:48:34.759 } 00:48:34.759 } 00:48:34.759 ] 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "subsystem": "sock", 00:48:34.759 "config": [ 00:48:34.759 { 00:48:34.759 "method": "sock_set_default_impl", 00:48:34.759 "params": { 00:48:34.759 "impl_name": "posix" 00:48:34.759 } 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "method": "sock_impl_set_options", 00:48:34.759 "params": { 00:48:34.759 "impl_name": "ssl", 00:48:34.759 "recv_buf_size": 4096, 00:48:34.759 "send_buf_size": 4096, 00:48:34.759 "enable_recv_pipe": true, 00:48:34.759 "enable_quickack": false, 00:48:34.759 "enable_placement_id": 0, 00:48:34.759 "enable_zerocopy_send_server": true, 00:48:34.759 "enable_zerocopy_send_client": false, 00:48:34.759 "zerocopy_threshold": 0, 00:48:34.759 "tls_version": 0, 00:48:34.759 "enable_ktls": false 00:48:34.759 } 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "method": "sock_impl_set_options", 00:48:34.759 "params": { 00:48:34.759 "impl_name": "posix", 00:48:34.759 "recv_buf_size": 2097152, 00:48:34.759 "send_buf_size": 2097152, 00:48:34.759 "enable_recv_pipe": true, 00:48:34.759 "enable_quickack": false, 00:48:34.759 "enable_placement_id": 0, 00:48:34.759 "enable_zerocopy_send_server": true, 00:48:34.759 "enable_zerocopy_send_client": false, 00:48:34.759 "zerocopy_threshold": 0, 00:48:34.759 "tls_version": 0, 00:48:34.759 "enable_ktls": false 00:48:34.759 } 00:48:34.759 } 00:48:34.759 ] 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "subsystem": "vmd", 00:48:34.759 "config": [] 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "subsystem": "accel", 00:48:34.759 "config": [ 00:48:34.759 { 00:48:34.759 "method": "accel_set_options", 00:48:34.759 "params": { 00:48:34.759 "small_cache_size": 128, 00:48:34.759 "large_cache_size": 16, 00:48:34.759 "task_count": 2048, 00:48:34.759 "sequence_count": 2048, 00:48:34.759 "buf_count": 2048 00:48:34.759 } 00:48:34.759 } 00:48:34.759 ] 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "subsystem": "bdev", 00:48:34.759 "config": [ 00:48:34.759 { 00:48:34.759 "method": "bdev_set_options", 00:48:34.759 "params": { 00:48:34.759 "bdev_io_pool_size": 65535, 00:48:34.759 "bdev_io_cache_size": 256, 00:48:34.759 "bdev_auto_examine": true, 00:48:34.759 "iobuf_small_cache_size": 128, 00:48:34.759 "iobuf_large_cache_size": 16 00:48:34.759 } 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "method": "bdev_raid_set_options", 00:48:34.759 "params": { 00:48:34.759 "process_window_size_kb": 1024, 00:48:34.759 "process_max_bandwidth_mb_sec": 0 00:48:34.759 } 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "method": "bdev_iscsi_set_options", 00:48:34.759 "params": { 00:48:34.759 "timeout_sec": 30 00:48:34.759 } 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "method": "bdev_nvme_set_options", 00:48:34.759 "params": { 00:48:34.759 "action_on_timeout": "none", 00:48:34.759 "timeout_us": 0, 00:48:34.759 "timeout_admin_us": 0, 00:48:34.759 "keep_alive_timeout_ms": 10000, 00:48:34.759 "arbitration_burst": 0, 00:48:34.759 "low_priority_weight": 0, 00:48:34.759 "medium_priority_weight": 0, 00:48:34.759 "high_priority_weight": 0, 00:48:34.759 "nvme_adminq_poll_period_us": 10000, 00:48:34.759 "nvme_ioq_poll_period_us": 0, 00:48:34.759 "io_queue_requests": 512, 00:48:34.759 
"delay_cmd_submit": true, 00:48:34.759 "transport_retry_count": 4, 00:48:34.759 "bdev_retry_count": 3, 00:48:34.759 "transport_ack_timeout": 0, 00:48:34.759 "ctrlr_loss_timeout_sec": 0, 00:48:34.759 "reconnect_delay_sec": 0, 00:48:34.759 "fast_io_fail_timeout_sec": 0, 00:48:34.759 "disable_auto_failback": false, 00:48:34.759 "generate_uuids": false, 00:48:34.759 "transport_tos": 0, 00:48:34.759 "nvme_error_stat": false, 00:48:34.759 "rdma_srq_size": 0, 00:48:34.759 "io_path_stat": false, 00:48:34.759 "allow_accel_sequence": false, 00:48:34.759 "rdma_max_cq_size": 0, 00:48:34.759 "rdma_cm_event_timeout_ms": 0, 00:48:34.759 "dhchap_digests": [ 00:48:34.759 "sha256", 00:48:34.759 "sha384", 00:48:34.759 "sha512" 00:48:34.759 ], 00:48:34.759 "dhchap_dhgroups": [ 00:48:34.759 "null", 00:48:34.759 "ffdhe2048", 00:48:34.759 "ffdhe3072", 00:48:34.759 "ffdhe4096", 00:48:34.759 "ffdhe6144", 00:48:34.759 "ffdhe8192" 00:48:34.759 ] 00:48:34.759 } 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "method": "bdev_nvme_attach_controller", 00:48:34.759 "params": { 00:48:34.759 "name": "nvme0", 00:48:34.759 "trtype": "TCP", 00:48:34.759 "adrfam": "IPv4", 00:48:34.759 "traddr": "127.0.0.1", 00:48:34.759 "trsvcid": "4420", 00:48:34.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:34.759 "prchk_reftag": false, 00:48:34.759 "prchk_guard": false, 00:48:34.759 "ctrlr_loss_timeout_sec": 0, 00:48:34.759 "reconnect_delay_sec": 0, 00:48:34.759 "fast_io_fail_timeout_sec": 0, 00:48:34.759 "psk": "key0", 00:48:34.759 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:34.759 "hdgst": false, 00:48:34.759 "ddgst": false, 00:48:34.759 "multipath": "multipath" 00:48:34.759 } 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "method": "bdev_nvme_set_hotplug", 00:48:34.759 "params": { 00:48:34.759 "period_us": 100000, 00:48:34.759 "enable": false 00:48:34.759 } 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "method": "bdev_wait_for_examine" 00:48:34.759 } 00:48:34.759 ] 00:48:34.759 }, 00:48:34.759 { 00:48:34.759 "subsystem": "nbd", 00:48:34.759 "config": [] 00:48:34.759 } 00:48:34.759 ] 00:48:34.759 }' 00:48:34.759 13:53:42 keyring_file -- keyring/file.sh@115 -- # killprocess 109679 00:48:34.759 13:53:42 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 109679 ']' 00:48:34.760 13:53:42 keyring_file -- common/autotest_common.sh@956 -- # kill -0 109679 00:48:34.760 13:53:42 keyring_file -- common/autotest_common.sh@957 -- # uname 00:48:34.760 13:53:42 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:34.760 13:53:42 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 109679 00:48:34.760 13:53:42 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:48:34.760 13:53:42 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:48:34.760 13:53:42 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 109679' 00:48:34.760 killing process with pid 109679 00:48:34.760 13:53:42 keyring_file -- common/autotest_common.sh@971 -- # kill 109679 00:48:34.760 Received shutdown signal, test time was about 1.000000 seconds 00:48:34.760 00:48:34.760 Latency(us) 00:48:34.760 [2024-11-07T12:53:42.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:34.760 [2024-11-07T12:53:42.767Z] =================================================================================================================== 00:48:34.760 [2024-11-07T12:53:42.767Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:34.760 13:53:42 
keyring_file -- common/autotest_common.sh@976 -- # wait 109679 00:48:35.341 13:53:43 keyring_file -- keyring/file.sh@118 -- # bperfpid=111344 00:48:35.341 13:53:43 keyring_file -- keyring/file.sh@120 -- # waitforlisten 111344 /var/tmp/bperf.sock 00:48:35.341 13:53:43 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:48:35.341 "subsystems": [ 00:48:35.341 { 00:48:35.341 "subsystem": "keyring", 00:48:35.341 "config": [ 00:48:35.341 { 00:48:35.341 "method": "keyring_file_add_key", 00:48:35.341 "params": { 00:48:35.341 "name": "key0", 00:48:35.341 "path": "/tmp/tmp.MNSAv0ZunF" 00:48:35.341 } 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "method": "keyring_file_add_key", 00:48:35.341 "params": { 00:48:35.341 "name": "key1", 00:48:35.341 "path": "/tmp/tmp.fcjmFYL4tH" 00:48:35.341 } 00:48:35.341 } 00:48:35.341 ] 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "subsystem": "iobuf", 00:48:35.341 "config": [ 00:48:35.341 { 00:48:35.341 "method": "iobuf_set_options", 00:48:35.341 "params": { 00:48:35.341 "small_pool_count": 8192, 00:48:35.341 "large_pool_count": 1024, 00:48:35.341 "small_bufsize": 8192, 00:48:35.341 "large_bufsize": 135168, 00:48:35.341 "enable_numa": false 00:48:35.341 } 00:48:35.341 } 00:48:35.341 ] 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "subsystem": "sock", 00:48:35.341 "config": [ 00:48:35.341 { 00:48:35.341 "method": "sock_set_default_impl", 00:48:35.341 "params": { 00:48:35.341 "impl_name": "posix" 00:48:35.341 } 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "method": "sock_impl_set_options", 00:48:35.341 "params": { 00:48:35.341 "impl_name": "ssl", 00:48:35.341 "recv_buf_size": 4096, 00:48:35.341 "send_buf_size": 4096, 00:48:35.341 "enable_recv_pipe": true, 00:48:35.341 "enable_quickack": false, 00:48:35.341 "enable_placement_id": 0, 00:48:35.341 "enable_zerocopy_send_server": true, 00:48:35.341 "enable_zerocopy_send_client": false, 00:48:35.341 "zerocopy_threshold": 0, 00:48:35.341 "tls_version": 0, 00:48:35.341 "enable_ktls": false 00:48:35.341 } 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "method": "sock_impl_set_options", 00:48:35.341 "params": { 00:48:35.341 "impl_name": "posix", 00:48:35.341 "recv_buf_size": 2097152, 00:48:35.341 "send_buf_size": 2097152, 00:48:35.341 "enable_recv_pipe": true, 00:48:35.341 "enable_quickack": false, 00:48:35.341 "enable_placement_id": 0, 00:48:35.341 "enable_zerocopy_send_server": true, 00:48:35.341 "enable_zerocopy_send_client": false, 00:48:35.341 "zerocopy_threshold": 0, 00:48:35.341 "tls_version": 0, 00:48:35.341 "enable_ktls": false 00:48:35.341 } 00:48:35.341 } 00:48:35.341 ] 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "subsystem": "vmd", 00:48:35.341 "config": [] 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "subsystem": "accel", 00:48:35.341 "config": [ 00:48:35.341 { 00:48:35.341 "method": "accel_set_options", 00:48:35.341 "params": { 00:48:35.341 "small_cache_size": 128, 00:48:35.341 "large_cache_size": 16, 00:48:35.341 "task_count": 2048, 00:48:35.341 "sequence_count": 2048, 00:48:35.341 "buf_count": 2048 00:48:35.341 } 00:48:35.341 } 00:48:35.341 ] 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "subsystem": "bdev", 00:48:35.341 "config": [ 00:48:35.341 { 00:48:35.341 "method": "bdev_set_options", 00:48:35.341 "params": { 00:48:35.341 "bdev_io_pool_size": 65535, 00:48:35.341 "bdev_io_cache_size": 256, 00:48:35.341 "bdev_auto_examine": true, 00:48:35.341 "iobuf_small_cache_size": 128, 00:48:35.341 "iobuf_large_cache_size": 16 00:48:35.341 } 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "method": "bdev_raid_set_options", 00:48:35.341 
"params": { 00:48:35.341 "process_window_size_kb": 1024, 00:48:35.341 "process_max_bandwidth_mb_sec": 0 00:48:35.341 } 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "method": "bdev_iscsi_set_options", 00:48:35.341 "params": { 00:48:35.341 "timeout_sec": 30 00:48:35.341 } 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "method": "bdev_nvme_set_options", 00:48:35.341 "params": { 00:48:35.341 "action_on_timeout": "none", 00:48:35.341 "timeout_us": 0, 00:48:35.341 "timeout_admin_us": 0, 00:48:35.341 "keep_alive_timeout_ms": 10000, 00:48:35.341 "arbitration_burst": 0, 00:48:35.341 "low_priority_weight": 0, 00:48:35.341 "medium_priority_weight": 0, 00:48:35.341 "high_priority_weight": 0, 00:48:35.341 "nvme_adminq_poll_period_us": 10000, 00:48:35.341 "nvme_ioq_poll_period_us": 0, 00:48:35.341 "io_queue_requests": 512, 00:48:35.341 "delay_cmd_submit": true, 00:48:35.341 "transport_retry_count": 4, 00:48:35.341 "bdev_retry_count": 3, 00:48:35.341 "transport_ack_timeout": 0, 00:48:35.341 "ctrlr_loss_timeout_sec": 0, 00:48:35.341 "reconnect_delay_sec": 0, 00:48:35.341 "fast_io_fail_timeout_sec": 0, 00:48:35.341 "disable_auto_failback": false, 00:48:35.341 "generate_uuids": false, 00:48:35.341 "transport_tos": 0, 00:48:35.341 "nvme_error_stat": false, 00:48:35.341 "rdma_srq_size": 0, 00:48:35.341 "io_path_stat": false, 00:48:35.341 "allow_accel_sequence": false, 00:48:35.341 "rdma_max_cq_size": 0, 00:48:35.341 "rdma_cm_event_timeout_ms": 0, 00:48:35.341 "dhchap_digests": [ 00:48:35.341 "sha256", 00:48:35.341 "sha384", 00:48:35.341 "sha512" 00:48:35.341 ], 00:48:35.341 "dhchap_dhgroups": [ 00:48:35.341 "null", 00:48:35.341 "ffdhe2048", 00:48:35.341 "ffdhe3072", 00:48:35.341 "ffdhe4096", 00:48:35.341 "ffdhe6144", 00:48:35.341 "ffdhe8192" 00:48:35.341 ] 00:48:35.341 } 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "method": "bdev_nvme_attach_controller", 00:48:35.341 "params": { 00:48:35.341 "name": "nvme0", 00:48:35.341 "trtype": "TCP", 00:48:35.341 "adrfam": "IPv4", 00:48:35.341 "traddr": "127.0.0.1", 00:48:35.341 "trsvcid": "4420", 00:48:35.341 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:35.341 "prchk_reftag": false, 00:48:35.341 "prchk_guard": false, 00:48:35.341 "ctrlr_loss_timeout_sec": 0, 00:48:35.341 "reconnect_delay_sec": 0, 00:48:35.341 "fast_io_fail_timeout_sec": 0, 00:48:35.341 "psk": "key0", 00:48:35.341 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:35.341 "hdgst": false, 00:48:35.341 "ddgst": false, 00:48:35.341 "multipath": "multipath" 00:48:35.341 } 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "method": "bdev_nvme_set_hotplug", 00:48:35.341 "params": { 00:48:35.341 "period_us": 100000, 00:48:35.341 "enable": false 00:48:35.341 } 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "method": "bdev_wait_for_examine" 00:48:35.341 } 00:48:35.341 ] 00:48:35.341 }, 00:48:35.341 { 00:48:35.341 "subsystem": "nbd", 00:48:35.341 "config": [] 00:48:35.341 } 00:48:35.341 ] 00:48:35.341 }' 00:48:35.341 13:53:43 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 111344 ']' 00:48:35.341 13:53:43 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:35.341 13:53:43 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:48:35.341 13:53:43 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:35.341 13:53:43 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bperf.sock...' 00:48:35.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:35.341 13:53:43 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:35.341 13:53:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:35.341 [2024-11-07 13:53:43.112711] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 00:48:35.341 [2024-11-07 13:53:43.112831] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111344 ] 00:48:35.341 [2024-11-07 13:53:43.254897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:35.341 [2024-11-07 13:53:43.328678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:35.601 [2024-11-07 13:53:43.595055] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:36.171 13:53:43 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:36.171 13:53:43 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:48:36.171 13:53:43 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:48:36.171 13:53:43 keyring_file -- keyring/file.sh@121 -- # jq length 00:48:36.171 13:53:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:36.171 13:53:44 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:48:36.171 13:53:44 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:48:36.171 13:53:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:36.171 13:53:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:36.171 13:53:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:36.171 13:53:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:36.171 13:53:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:36.432 13:53:44 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:48:36.432 13:53:44 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:48:36.432 13:53:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:36.432 13:53:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:36.432 13:53:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:36.432 13:53:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:36.432 13:53:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:36.432 13:53:44 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:48:36.432 13:53:44 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:48:36.432 13:53:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:48:36.432 13:53:44 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:48:36.692 13:53:44 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:48:36.692 13:53:44 keyring_file -- keyring/file.sh@1 -- # cleanup 00:48:36.692 13:53:44 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.MNSAv0ZunF /tmp/tmp.fcjmFYL4tH 00:48:36.692 13:53:44 keyring_file -- keyring/file.sh@20 -- # killprocess 111344 00:48:36.692 13:53:44 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 111344 ']' 00:48:36.692 13:53:44 keyring_file -- common/autotest_common.sh@956 -- # kill -0 111344 00:48:36.692 13:53:44 keyring_file -- common/autotest_common.sh@957 -- # uname 00:48:36.692 13:53:44 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:36.692 13:53:44 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 111344 00:48:36.692 13:53:44 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:48:36.692 13:53:44 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:48:36.692 13:53:44 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 111344' 00:48:36.692 killing process with pid 111344 00:48:36.692 13:53:44 keyring_file -- common/autotest_common.sh@971 -- # kill 111344 00:48:36.692 Received shutdown signal, test time was about 1.000000 seconds 00:48:36.692 00:48:36.692 Latency(us) 00:48:36.692 [2024-11-07T12:53:44.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:36.692 [2024-11-07T12:53:44.699Z] =================================================================================================================== 00:48:36.692 [2024-11-07T12:53:44.699Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:48:36.692 13:53:44 keyring_file -- common/autotest_common.sh@976 -- # wait 111344 00:48:37.263 13:53:45 keyring_file -- keyring/file.sh@21 -- # killprocess 109521 00:48:37.263 13:53:45 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 109521 ']' 00:48:37.263 13:53:45 keyring_file -- common/autotest_common.sh@956 -- # kill -0 109521 00:48:37.263 13:53:45 keyring_file -- common/autotest_common.sh@957 -- # uname 00:48:37.263 13:53:45 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:37.263 13:53:45 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 109521 00:48:37.263 13:53:45 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:48:37.263 13:53:45 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:48:37.263 13:53:45 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 109521' 00:48:37.263 killing process with pid 109521 00:48:37.263 13:53:45 keyring_file -- common/autotest_common.sh@971 -- # kill 109521 00:48:37.263 13:53:45 keyring_file -- common/autotest_common.sh@976 -- # wait 109521 00:48:39.176 00:48:39.176 real 0m13.982s 00:48:39.176 user 0m30.939s 00:48:39.176 sys 0m2.899s 00:48:39.176 13:53:46 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:48:39.176 13:53:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:39.176 ************************************ 00:48:39.176 END TEST keyring_file 00:48:39.176 ************************************ 00:48:39.176 13:53:46 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:48:39.176 13:53:46 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:48:39.176 13:53:46 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:48:39.176 13:53:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:48:39.176 13:53:46 -- common/autotest_common.sh@10 -- # 
set +x 00:48:39.176 ************************************ 00:48:39.176 START TEST keyring_linux 00:48:39.176 ************************************ 00:48:39.176 13:53:46 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:48:39.176 Joined session keyring: 106089791 00:48:39.176 * Looking for test storage... 00:48:39.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:48:39.176 13:53:46 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:48:39.176 13:53:46 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:48:39.176 13:53:46 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:48:39.176 13:53:46 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@345 -- # : 1 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:39.176 13:53:46 keyring_linux -- scripts/common.sh@368 -- # return 0 00:48:39.176 13:53:46 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:39.176 13:53:46 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:48:39.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:39.176 --rc genhtml_branch_coverage=1 00:48:39.176 --rc genhtml_function_coverage=1 00:48:39.176 --rc genhtml_legend=1 00:48:39.176 --rc geninfo_all_blocks=1 00:48:39.176 --rc geninfo_unexecuted_blocks=1 00:48:39.176 00:48:39.176 ' 00:48:39.176 13:53:46 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:48:39.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:39.176 --rc genhtml_branch_coverage=1 00:48:39.176 --rc genhtml_function_coverage=1 00:48:39.176 --rc genhtml_legend=1 00:48:39.176 --rc geninfo_all_blocks=1 00:48:39.176 --rc geninfo_unexecuted_blocks=1 00:48:39.176 00:48:39.176 ' 00:48:39.176 13:53:46 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:48:39.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:39.176 --rc genhtml_branch_coverage=1 00:48:39.176 --rc genhtml_function_coverage=1 00:48:39.176 --rc genhtml_legend=1 00:48:39.176 --rc geninfo_all_blocks=1 00:48:39.176 --rc geninfo_unexecuted_blocks=1 00:48:39.176 00:48:39.176 ' 00:48:39.176 13:53:46 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:48:39.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:39.176 --rc genhtml_branch_coverage=1 00:48:39.176 --rc genhtml_function_coverage=1 00:48:39.176 --rc genhtml_legend=1 00:48:39.176 --rc geninfo_all_blocks=1 00:48:39.176 --rc geninfo_unexecuted_blocks=1 00:48:39.176 00:48:39.176 ' 00:48:39.176 13:53:46 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:48:39.176 13:53:46 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:39.176 13:53:46 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:48:39.176 13:53:46 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:39.176 13:53:46 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:39.176 13:53:46 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:39.176 13:53:46 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:39.176 13:53:46 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:48:39.176 13:53:46 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:39.177 13:53:46 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:39.177 13:53:46 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:39.177 13:53:46 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:39.177 13:53:46 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:39.177 13:53:47 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:48:39.177 13:53:47 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:39.177 13:53:47 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:39.177 13:53:47 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:39.177 13:53:47 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:39.177 13:53:47 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:39.177 13:53:47 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:39.177 13:53:47 keyring_linux -- paths/export.sh@5 -- # export PATH 00:48:39.177 13:53:47 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
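An aside on the session keyring: this whole test runs under scripts/keyctl-session-wrapper, which is why the transcript opened with "Joined session keyring: 106089791". Every key the test later adds to @s lands in a throwaway session keyring and is discarded when the wrapped shell exits, so nothing leaks into the builder's persistent keyrings. A minimal sketch of that isolation pattern, using only stock keyctl(1); the payload string here is illustrative, not the real PSK:

    # join an anonymous session keyring, add a user key to it, and look it
    # up again; the key vanishes as soon as the wrapped command returns
    keyctl session - sh -c '
        keyctl add user :spdk-test:key0 "illustrative-payload" @s
        keyctl search @s user :spdk-test:key0
    '

Both add and search print the key's serial number, which is the same number the check_keys steps later in this log compare against.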
00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:39.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:48:39.177 13:53:47 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:48:39.177 13:53:47 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:48:39.177 13:53:47 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:48:39.177 13:53:47 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:48:39.177 13:53:47 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:48:39.177 13:53:47 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@733 -- # python - 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:48:39.177 /tmp/:spdk-test:key0 00:48:39.177 13:53:47 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:48:39.177 
13:53:47 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:48:39.177 13:53:47 keyring_linux -- nvmf/common.sh@733 -- # python - 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:48:39.177 13:53:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:48:39.177 /tmp/:spdk-test:key1 00:48:39.177 13:53:47 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=112274 00:48:39.177 13:53:47 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 112274 00:48:39.177 13:53:47 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:48:39.177 13:53:47 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 112274 ']' 00:48:39.177 13:53:47 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:39.177 13:53:47 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:39.177 13:53:47 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:39.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:39.177 13:53:47 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:39.177 13:53:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:39.437 [2024-11-07 13:53:47.200427] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
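Before the target comes up, prep_key converts each raw hex key into the NVMe TLS PSK interchange format and writes it to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. Judging from the logged output (NVMeTLSkey-1:00:...JEiQ:, whose 48 base64 characters decode to 36 bytes for a 32-character key), the inline "python -" step appears to base64-encode the ASCII key bytes followed by their little-endian CRC32 and wrap the result in a versioned header, with the 00 field carrying the requested digest. A sketch under that assumption, collapsed to a one-liner in place of the script's heredoc:

    key=00112233445566778899aabbccddeeff    # same key0 as above
    # base64(ASCII key bytes + 4-byte little-endian CRC32), versioned header
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k+struct.pack("<I", zlib.crc32(k))).decode())' "$key"

If the assumption holds, this prints the same NVMeTLSkey-1:00:MDAx...JEiQ: string that keyctl add receives below.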
00:48:39.437 [2024-11-07 13:53:47.200544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112274 ] 00:48:39.437 [2024-11-07 13:53:47.338847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:39.437 [2024-11-07 13:53:47.437124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:40.378 13:53:48 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:40.378 13:53:48 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:48:40.378 13:53:48 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:48:40.378 13:53:48 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:40.378 13:53:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:40.378 [2024-11-07 13:53:48.086133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:40.378 null0 00:48:40.378 [2024-11-07 13:53:48.118167] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:40.378 [2024-11-07 13:53:48.118636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:48:40.378 13:53:48 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:40.378 13:53:48 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:48:40.378 16680656 00:48:40.378 13:53:48 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:48:40.378 132716438 00:48:40.378 13:53:48 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=112424 00:48:40.378 13:53:48 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 112424 /var/tmp/bperf.sock 00:48:40.378 13:53:48 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:48:40.378 13:53:48 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 112424 ']' 00:48:40.378 13:53:48 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:40.378 13:53:48 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:40.378 13:53:48 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:40.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:40.378 13:53:48 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:40.378 13:53:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:40.378 [2024-11-07 13:53:48.233591] Starting SPDK v25.01-pre git sha1 b264e22f0 / DPDK 24.03.0 initialization... 
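bdevperf was launched with --wait-for-rpc, so after EAL init it idles until it is configured over its private socket at /var/tmp/bperf.sock. The bperf_cmd calls that follow perform that bring-up; written out as plain rpc.py invocations (all three appear verbatim in the trace below) they amount to:

    # enable the kernel-keyring backend before subsystem init, then finish
    # init and attach the controller by keyring name instead of a key file
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0

Because :spdk-test:key0 names a key in the session keyring populated above rather than a file path, the keyring_linux backend resolves it via the kernel keyring at attach time.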
00:48:40.378 [2024-11-07 13:53:48.233698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112424 ] 00:48:40.378 [2024-11-07 13:53:48.374209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:40.639 [2024-11-07 13:53:48.449881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:41.209 13:53:48 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:41.209 13:53:48 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:48:41.209 13:53:48 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:48:41.209 13:53:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:48:41.209 13:53:49 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:48:41.209 13:53:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:48:41.778 13:53:49 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:48:41.778 13:53:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:48:41.778 [2024-11-07 13:53:49.644742] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:41.778 nvme0n1 00:48:41.778 13:53:49 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:48:41.778 13:53:49 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:48:41.778 13:53:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:48:41.778 13:53:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:48:41.778 13:53:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:48:41.778 13:53:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:42.038 13:53:49 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:48:42.038 13:53:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:48:42.038 13:53:49 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:48:42.038 13:53:49 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:48:42.038 13:53:49 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:42.038 13:53:49 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:48:42.038 13:53:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:42.297 13:53:50 keyring_linux -- keyring/linux.sh@25 -- # sn=16680656 00:48:42.297 13:53:50 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:48:42.297 13:53:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:48:42.297 13:53:50 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 16680656 == \1\6\6\8\0\6\5\6 ]] 00:48:42.297 13:53:50 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 16680656 00:48:42.297 13:53:50 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:48:42.297 13:53:50 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:42.297 Running I/O for 1 seconds... 00:48:43.235 13829.00 IOPS, 54.02 MiB/s 00:48:43.235 Latency(us) 00:48:43.235 [2024-11-07T12:53:51.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:43.236 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:48:43.236 nvme0n1 : 1.01 13830.29 54.02 0.00 0.00 9212.99 7318.19 15619.41 00:48:43.236 [2024-11-07T12:53:51.243Z] =================================================================================================================== 00:48:43.236 [2024-11-07T12:53:51.243Z] Total : 13830.29 54.02 0.00 0.00 9212.99 7318.19 15619.41 00:48:43.236 { 00:48:43.236 "results": [ 00:48:43.236 { 00:48:43.236 "job": "nvme0n1", 00:48:43.236 "core_mask": "0x2", 00:48:43.236 "workload": "randread", 00:48:43.236 "status": "finished", 00:48:43.236 "queue_depth": 128, 00:48:43.236 "io_size": 4096, 00:48:43.236 "runtime": 1.009162, 00:48:43.236 "iops": 13830.2869113185, 00:48:43.236 "mibps": 54.02455824733789, 00:48:43.236 "io_failed": 0, 00:48:43.236 "io_timeout": 0, 00:48:43.236 "avg_latency_us": 9212.987990733442, 00:48:43.236 "min_latency_us": 7318.1866666666665, 00:48:43.236 "max_latency_us": 15619.413333333334 00:48:43.236 } 00:48:43.236 ], 00:48:43.236 "core_count": 1 00:48:43.236 } 00:48:43.236 13:53:51 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:43.236 13:53:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:48:43.496 13:53:51 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:48:43.496 13:53:51 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:48:43.496 13:53:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:48:43.496 13:53:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:48:43.496 13:53:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:48:43.496 13:53:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:43.756 13:53:51 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:48:43.756 13:53:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:48:43.756 13:53:51 keyring_linux -- keyring/linux.sh@23 -- # return 00:48:43.756 13:53:51 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:43.756 13:53:51 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:48:43.756 13:53:51 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:48:43.756 13:53:51 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:48:43.756 13:53:51 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:43.756 13:53:51 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:48:43.756 13:53:51 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:43.756 13:53:51 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:43.756 13:53:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:43.756 [2024-11-07 13:53:51.731770] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:48:43.756 [2024-11-07 13:53:51.732429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041be80 (107): Transport endpoint is not connected 00:48:43.756 [2024-11-07 13:53:51.733413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500041be80 (9): Bad file descriptor 00:48:43.756 [2024-11-07 13:53:51.734411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:48:43.756 [2024-11-07 13:53:51.734425] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:48:43.756 [2024-11-07 13:53:51.734438] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:48:43.756 [2024-11-07 13:53:51.734447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
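This attach is the negative half of the test: it reuses the same transport but points --psk at :spdk-test:key1, whose PSK does not match the key the target was configured with, so the TLS handshake collapses and the initiator sees errno 107 (Transport endpoint is not connected). The NOT helper from autotest_common.sh (the local es=0 / valid_exec_arg bookkeeping traced above) inverts the exit status, so the test passes only when the attach fails; the captured JSON-RPC request and error response follow below. In outline:

    # expected-failure attach: NOT succeeds only if the wrapped command fails
    NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1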
00:48:43.756 request: 00:48:43.756 { 00:48:43.756 "name": "nvme0", 00:48:43.756 "trtype": "tcp", 00:48:43.756 "traddr": "127.0.0.1", 00:48:43.756 "adrfam": "ipv4", 00:48:43.756 "trsvcid": "4420", 00:48:43.756 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:43.756 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:43.756 "prchk_reftag": false, 00:48:43.756 "prchk_guard": false, 00:48:43.756 "hdgst": false, 00:48:43.756 "ddgst": false, 00:48:43.756 "psk": ":spdk-test:key1", 00:48:43.756 "allow_unrecognized_csi": false, 00:48:43.756 "method": "bdev_nvme_attach_controller", 00:48:43.756 "req_id": 1 00:48:43.756 } 00:48:43.756 Got JSON-RPC error response 00:48:43.756 response: 00:48:43.756 { 00:48:43.756 "code": -5, 00:48:43.756 "message": "Input/output error" 00:48:43.756 } 00:48:43.756 13:53:51 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:48:43.756 13:53:51 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:43.756 13:53:51 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:43.756 13:53:51 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:43.756 13:53:51 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:48:43.756 13:53:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:48:43.756 13:53:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:48:43.756 13:53:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:48:43.756 13:53:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:48:43.756 13:53:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:48:43.756 13:53:51 keyring_linux -- keyring/linux.sh@33 -- # sn=16680656 00:48:43.756 13:53:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 16680656 00:48:44.016 1 links removed 00:48:44.016 13:53:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:48:44.016 13:53:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:48:44.016 13:53:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:48:44.016 13:53:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:48:44.016 13:53:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:48:44.016 13:53:51 keyring_linux -- keyring/linux.sh@33 -- # sn=132716438 00:48:44.016 13:53:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 132716438 00:48:44.016 1 links removed 00:48:44.016 13:53:51 keyring_linux -- keyring/linux.sh@41 -- # killprocess 112424 00:48:44.016 13:53:51 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 112424 ']' 00:48:44.016 13:53:51 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 112424 00:48:44.016 13:53:51 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:48:44.016 13:53:51 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:44.016 13:53:51 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 112424 00:48:44.016 13:53:51 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:48:44.016 13:53:51 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:48:44.016 13:53:51 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 112424' 00:48:44.016 killing process with pid 112424 00:48:44.016 13:53:51 keyring_linux -- common/autotest_common.sh@971 -- # kill 112424 00:48:44.016 Received shutdown signal, test time was about 1.000000 seconds 00:48:44.016 00:48:44.016 Latency(us) 
00:48:44.016 [2024-11-07T12:53:52.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:44.016 [2024-11-07T12:53:52.023Z] =================================================================================================================== 00:48:44.016 [2024-11-07T12:53:52.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:44.016 13:53:51 keyring_linux -- common/autotest_common.sh@976 -- # wait 112424 00:48:44.276 13:53:52 keyring_linux -- keyring/linux.sh@42 -- # killprocess 112274 00:48:44.276 13:53:52 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 112274 ']' 00:48:44.276 13:53:52 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 112274 00:48:44.276 13:53:52 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:48:44.276 13:53:52 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:44.277 13:53:52 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 112274 00:48:44.537 13:53:52 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:48:44.537 13:53:52 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:48:44.537 13:53:52 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 112274' 00:48:44.537 killing process with pid 112274 00:48:44.537 13:53:52 keyring_linux -- common/autotest_common.sh@971 -- # kill 112274 00:48:44.537 13:53:52 keyring_linux -- common/autotest_common.sh@976 -- # wait 112274 00:48:46.449 00:48:46.449 real 0m7.148s 00:48:46.449 user 0m11.809s 00:48:46.449 sys 0m1.644s 00:48:46.449 13:53:53 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:48:46.449 13:53:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:46.449 ************************************ 00:48:46.449 END TEST keyring_linux 00:48:46.449 ************************************ 00:48:46.449 13:53:53 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:48:46.449 13:53:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:48:46.449 13:53:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:48:46.449 13:53:53 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:48:46.449 13:53:53 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:48:46.449 13:53:53 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:48:46.449 13:53:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:48:46.449 13:53:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:48:46.449 13:53:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:48:46.449 13:53:53 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:48:46.449 13:53:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:48:46.449 13:53:53 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:48:46.449 13:53:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:48:46.449 13:53:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:48:46.449 13:53:53 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:48:46.449 13:53:53 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:48:46.449 13:53:53 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:48:46.449 13:53:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:48:46.449 13:53:53 -- common/autotest_common.sh@10 -- # set +x 00:48:46.449 13:53:53 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:48:46.449 13:53:53 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:48:46.449 13:53:53 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:48:46.449 13:53:53 -- common/autotest_common.sh@10 -- # set +x 00:48:54.588 INFO: APP EXITING 00:48:54.588 INFO: killing all 
VMs 00:48:54.588 INFO: killing vhost app 00:48:54.588 INFO: EXIT DONE 00:48:57.885 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:65:00.0 (144d a80a): Already using the nvme driver 00:48:57.885 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:48:57.885 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:49:02.087 Cleaning 00:49:02.087 Removing: /var/run/dpdk/spdk0/config 00:49:02.087 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:49:02.087 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:49:02.087 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:49:02.087 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:49:02.087 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:49:02.087 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:49:02.087 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:49:02.087 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:49:02.087 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:49:02.087 Removing: /var/run/dpdk/spdk0/hugepage_info 00:49:02.087 Removing: /var/run/dpdk/spdk1/config 00:49:02.087 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:49:02.087 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:49:02.087 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:49:02.087 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:49:02.087 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:49:02.087 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:49:02.087 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:49:02.087 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:49:02.087 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:49:02.087 Removing: /var/run/dpdk/spdk1/hugepage_info 00:49:02.087 Removing: /var/run/dpdk/spdk2/config 00:49:02.087 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:49:02.087 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:49:02.087 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:49:02.087 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:49:02.087 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:49:02.087 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:49:02.087 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:49:02.087 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:49:02.087 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:49:02.087 Removing: /var/run/dpdk/spdk2/hugepage_info 00:49:02.087 Removing: /var/run/dpdk/spdk3/config 00:49:02.087 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:49:02.087 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:49:02.087 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:49:02.087 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:49:02.087 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:49:02.087 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:49:02.087 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:49:02.087 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:49:02.087 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:49:02.087 Removing: /var/run/dpdk/spdk3/hugepage_info 00:49:02.087 Removing: /var/run/dpdk/spdk4/config 00:49:02.087 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:49:02.087 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:49:02.087 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:49:02.087 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:49:02.087 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:49:02.087 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:49:02.087 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:49:02.087 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:49:02.087 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:49:02.087 Removing: /var/run/dpdk/spdk4/hugepage_info 00:49:02.087 Removing: /dev/shm/bdev_svc_trace.1 00:49:02.087 Removing: /dev/shm/nvmf_trace.0 00:49:02.087 Removing: /dev/shm/spdk_tgt_trace.pid3579788 00:49:02.087 Removing: /var/run/dpdk/spdk0 00:49:02.087 Removing: /var/run/dpdk/spdk1 00:49:02.087 Removing: /var/run/dpdk/spdk2 00:49:02.087 Removing: /var/run/dpdk/spdk3 00:49:02.087 Removing: /var/run/dpdk/spdk4 00:49:02.087 Removing: /var/run/dpdk/spdk_pid103438 00:49:02.087 Removing: /var/run/dpdk/spdk_pid103913 00:49:02.087 Removing: /var/run/dpdk/spdk_pid104448 00:49:02.087 Removing: /var/run/dpdk/spdk_pid109521 00:49:02.087 Removing: /var/run/dpdk/spdk_pid109679 00:49:02.087 Removing: /var/run/dpdk/spdk_pid111344 00:49:02.087 Removing: /var/run/dpdk/spdk_pid112274 00:49:02.087 Removing: /var/run/dpdk/spdk_pid112424 00:49:02.087 Removing: /var/run/dpdk/spdk_pid16070 00:49:02.087 Removing: /var/run/dpdk/spdk_pid2413 00:49:02.087 Removing: /var/run/dpdk/spdk_pid28147 00:49:02.087 Removing: /var/run/dpdk/spdk_pid30127 00:49:02.087 Removing: /var/run/dpdk/spdk_pid31443 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3576985 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3579788 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3580756 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3582596 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3583262 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3584661 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3584995 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3585598 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3586951 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3587754 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3588487 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3589222 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3589968 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3590704 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3590990 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3591271 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3591668 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3592892 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3596766 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3597334 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3597913 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3598231 
00:49:02.087 Removing: /var/run/dpdk/spdk_pid3599616 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3599948 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3601334 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3601665 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3602074 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3602384 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3602792 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3603097 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3604216 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3604890 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3605290 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3610772 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3616483 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3629668 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3630484 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3636384 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3636738 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3642704 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3650473 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3653529 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3667642 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3679936 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3682160 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3683835 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3706979 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3712405 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3817741 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3824883 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3833191 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3845730 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3882389 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3888480 00:49:02.087 Removing: /var/run/dpdk/spdk_pid3890407 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3892715 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3893049 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3893395 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3893739 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3894767 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3897084 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3898483 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3899192 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3901884 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3902904 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3903940 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3909447 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3917300 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3917302 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3917303 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3922527 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3927843 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3933529 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3980224 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3984981 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3992799 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3994697 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3996805 00:49:02.348 Removing: /var/run/dpdk/spdk_pid3998894 00:49:02.348 Removing: /var/run/dpdk/spdk_pid4005340 00:49:02.348 Removing: /var/run/dpdk/spdk_pid4011893 00:49:02.348 Removing: /var/run/dpdk/spdk_pid4017575 00:49:02.348 Removing: /var/run/dpdk/spdk_pid4027963 00:49:02.348 Removing: /var/run/dpdk/spdk_pid4027966 00:49:02.348 Removing: /var/run/dpdk/spdk_pid4033707 00:49:02.348 Removing: /var/run/dpdk/spdk_pid4033992 00:49:02.348 Removing: /var/run/dpdk/spdk_pid4034321 00:49:02.348 Removing: /var/run/dpdk/spdk_pid4034962 00:49:02.348 Removing: /var/run/dpdk/spdk_pid4034983 
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4036327
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4038295
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4040241
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4042089
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4043985
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4045867
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4053828
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4054654
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4055928
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4057857
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4065131
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4068325
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4075357
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4082463
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4093000
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4102192
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4102254
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4127434
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4128376
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4129107
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4129901
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4131158
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4131841
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4132660
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4133509
00:49:02.348 Removing: /var/run/dpdk/spdk_pid4139204
00:49:02.609 Removing: /var/run/dpdk/spdk_pid4139595
00:49:02.609 Removing: /var/run/dpdk/spdk_pid4147515
00:49:02.609 Removing: /var/run/dpdk/spdk_pid4147890
00:49:02.609 Removing: /var/run/dpdk/spdk_pid4154959
00:49:02.609 Removing: /var/run/dpdk/spdk_pid4161190
00:49:02.609 Removing: /var/run/dpdk/spdk_pid4173099
00:49:02.609 Removing: /var/run/dpdk/spdk_pid4173763
00:49:02.609 Removing: /var/run/dpdk/spdk_pid4179447
00:49:02.609 Removing: /var/run/dpdk/spdk_pid4179793
00:49:02.609 Removing: /var/run/dpdk/spdk_pid4185472
00:49:02.609 Removing: /var/run/dpdk/spdk_pid4192815
00:49:02.609 Removing: /var/run/dpdk/spdk_pid52927
00:49:02.609 Removing: /var/run/dpdk/spdk_pid58296
00:49:02.609 Removing: /var/run/dpdk/spdk_pid61691
00:49:02.609 Removing: /var/run/dpdk/spdk_pid69927
00:49:02.609 Removing: /var/run/dpdk/spdk_pid69933
00:49:02.609 Removing: /var/run/dpdk/spdk_pid77033
00:49:02.609 Removing: /var/run/dpdk/spdk_pid79500
00:49:02.609 Removing: /var/run/dpdk/spdk_pid81994
00:49:02.609 Removing: /var/run/dpdk/spdk_pid83658
00:49:02.609 Removing: /var/run/dpdk/spdk_pid86244
00:49:02.609 Removing: /var/run/dpdk/spdk_pid87781
00:49:02.609 Removing: /var/run/dpdk/spdk_pid98679
00:49:02.609 Removing: /var/run/dpdk/spdk_pid99221
00:49:02.609 Removing: /var/run/dpdk/spdk_pid99882
00:49:02.609 Clean
00:49:02.609 13:54:10 -- common/autotest_common.sh@1451 -- # return 0
00:49:02.609 13:54:10 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:49:02.609 13:54:10 -- common/autotest_common.sh@730 -- # xtrace_disable
00:49:02.609 13:54:10 -- common/autotest_common.sh@10 -- # set +x
00:49:02.609 13:54:10 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:49:02.609 13:54:10 -- common/autotest_common.sh@730 -- # xtrace_disable
00:49:02.609 13:54:10 -- common/autotest_common.sh@10 -- # set +x
00:49:02.609 13:54:10 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:49:02.609 13:54:10 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:49:02.609 13:54:10 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:49:02.869 13:54:10 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:49:02.869 13:54:10 -- spdk/autotest.sh@394 -- # hostname
00:49:02.870 13:54:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:49:02.870 geninfo: WARNING: invalid characters removed from testname!
00:49:24.883 13:54:29 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:24.883 13:54:32 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:26.263 13:54:34 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:28.804 13:54:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:30.186 13:54:37 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:31.568 13:54:39 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:32.994 13:54:40 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
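[Editor's note] Condensing the coverage steps traced above: lcov -c captures the post-test counters into cov_test.info, lcov -a merges that capture with the pre-test baseline into cov_total.info, and the repeated lcov -r passes strip DPDK, system headers, and example/app helpers from the combined report (the '/usr/*' pass also adds --ignore-errors unused,unused to tolerate a pattern that matches nothing). A sketch of the same pipeline as a standalone script, assuming cov_base.info was captured the same way before the tests ran and keeping only two of the log's --rc flags for brevity:

    #!/usr/bin/env bash
    # Illustrative condensation of the lcov post-processing traced above.
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rc=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)

    # Capture counters accumulated during the test run.
    lcov "${rc[@]}" -q -c --no-external -d "$src" -t spdk-cyp-12 -o "$out/cov_test.info"

    # Merge the pre-test baseline with the post-test capture.
    lcov "${rc[@]}" -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Exclude third-party and helper paths from the combined report.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov "${rc[@]}" -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done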
00:49:32.994 13:54:40 -- spdk/autorun.sh@1 -- $ timing_finish
00:49:32.994 13:54:40 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:49:32.994 13:54:40 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:49:32.994 13:54:40 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:49:32.994 13:54:40 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:49:32.994 + [[ -n 3491358 ]]
00:49:32.994 + sudo kill 3491358
00:49:33.005 [Pipeline] }
00:49:33.021 [Pipeline] // stage
00:49:33.025 [Pipeline] }
00:49:33.040 [Pipeline] // timeout
00:49:33.045 [Pipeline] }
00:49:33.060 [Pipeline] // catchError
00:49:33.065 [Pipeline] }
00:49:33.080 [Pipeline] // wrap
00:49:33.086 [Pipeline] }
00:49:33.100 [Pipeline] // catchError
00:49:33.109 [Pipeline] stage
00:49:33.111 [Pipeline] { (Epilogue)
00:49:33.124 [Pipeline] catchError
00:49:33.126 [Pipeline] {
00:49:33.139 [Pipeline] echo
00:49:33.140 Cleanup processes
00:49:33.147 [Pipeline] sh
00:49:33.438 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:33.438 122860 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:33.454 [Pipeline] sh
00:49:33.743 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:33.744 ++ grep -v 'sudo pgrep'
00:49:33.744 ++ awk '{print $1}'
00:49:33.744 + sudo kill -9
00:49:33.744 + true
00:49:33.757 [Pipeline] sh
00:49:34.046 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:49:46.429 [Pipeline] sh
00:49:46.718 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:49:46.718 Artifacts sizes are good
00:49:46.734 [Pipeline] archiveArtifacts
00:49:46.741 Archiving artifacts
00:49:46.898 [Pipeline] sh
00:49:47.184 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:49:47.200 [Pipeline] cleanWs
00:49:47.211 [WS-CLEANUP] Deleting project workspace...
00:49:47.212 [WS-CLEANUP] Deferred wipeout is used...
00:49:47.220 [WS-CLEANUP] done
00:49:47.222 [Pipeline] }
00:49:47.239 [Pipeline] // catchError
00:49:47.252 [Pipeline] sh
00:49:47.540 + logger -p user.info -t JENKINS-CI
00:49:47.550 [Pipeline] }
00:49:47.564 [Pipeline] // stage
00:49:47.570 [Pipeline] }
00:49:47.585 [Pipeline] // node
00:49:47.591 [Pipeline] End of Pipeline
00:49:47.630 Finished: SUCCESS
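[Editor's note] One footnote on the timing step traced in the epilogue above: timing_finish feeds the accumulated timing.txt to Brendan Gregg's flamegraph.pl, which reads folded-stack input and writes an interactive SVG to stdout. A hedged sketch of the same invocation; the timing.svg redirect is an assumption, since the log does not show where the output goes:

    # Illustrative: render the build-timing flame graph from timing.txt.
    /usr/local/FlameGraph/flamegraph.pl \
        --title 'Build Timing' \
        --nametype Step: \
        --countname seconds \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt > timing.svg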